diff --git "a/data.jsonl" "b/data.jsonl" --- "a/data.jsonl" +++ "b/data.jsonl" @@ -1,4 +1,4 @@ -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "559609fe98ec2145788133687e64a6e87766bc77", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/25525", "iss_label": "Bug\nmodule:feature_extraction", "title": "Extend SequentialFeatureSelector example to demonstrate how to use negative tol", "body": "### Describe the bug\r\n\r\nI utilized the **SequentialFeatureSelector** for feature selection in my code, with the direction set to \"backward.\" The tolerance value is negative and the selection process stops when the decrease in the metric, AUC in this case, is less than the specified tolerance. Generally, increasing the number of features results in a higher AUC, but sacrificing some features, especially correlated ones that offer little contribution, can produce a pessimistic model with a lower AUC. The code worked as expected in **sklearn 1.1.1**, but when I updated to **sklearn 1.2.1**, I encountered the following error.\r\n\r\n### Steps/Code to Reproduce\r\n\r\n```python\r\nfrom sklearn.datasets import load_breast_cancer\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.feature_selection import SequentialFeatureSelector\r\nfrom sklearn.preprocessing import StandardScaler\r\nfrom sklearn.pipeline import Pipeline\r\n\r\nX, y = load_breast_cancer(return_X_y=True)\r\n\r\nTOL = -0.001\r\nfeature_selector = SequentialFeatureSelector(\r\n LogisticRegression(max_iter=1000),\r\n n_features_to_select=\"auto\",\r\n direction=\"backward\",\r\n scoring=\"roc_auc\",\r\n tol=TOL\r\n )\r\n\r\n\r\npipe = Pipeline(\r\n [('scaler', StandardScaler()), \r\n ('feature_selector', feature_selector), \r\n ('log_reg', LogisticRegression(max_iter=1000))]\r\n )\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n pipe.fit(X, y)\r\n print(pipe['log_reg'].coef_[0])\r\n\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n$ python sfs_tol.py \r\n[-2.0429818 0.5364346 -1.35765488 -2.85009904 -2.84603016]\r\n```\r\n\r\n### Actual Results\r\n\r\n```python-traceback\r\n$ python sfs_tol.py \r\nTraceback (most recent call last):\r\n File \"/home/modelling/users-workspace/nsofinij/lab/open-source/sfs_tol.py\", line 28, in \r\n pipe.fit(X, y)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 401, in fit\r\n Xt = self._fit(X, y, **fit_params_steps)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 359, in _fit\r\n X, fitted_transformer = fit_transform_one_cached(\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/joblib/memory.py\", line 349, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 893, in _fit_transform_one\r\n res = transformer.fit_transform(X, y, **fit_params)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_set_output.py\", line 142, in wrapped\r\n data_to_wrap = f(self, X, *args, **kwargs)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py\", line 862, in fit_transform\r\n return self.fit(X, y, **fit_params).transform(X)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_selection/_sequential.py\", line 201, in fit\r\n self._validate_params()\r\n File 
\"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py\", line 581, in _validate_params\r\n validate_parameter_constraints(\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_param_validation.py\", line 97, in validate_parameter_constraints\r\n raise InvalidParameterError(\r\nsklearn.utils._param_validation.InvalidParameterError: The 'tol' parameter of SequentialFeatureSelector must be None or a float in the range (0, inf). Got -0.001 instead.\r\n\r\n```\r\n\r\n### Versions\r\n\r\n```shell\r\nSystem:\r\n python: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0]\r\nexecutable: /home/modelling/opt/anaconda3/envs/py310/bin/python\r\n machine: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26\r\n\r\nPython dependencies:\r\n sklearn: 1.2.1\r\n pip: 23.0\r\n setuptools: 66.1.1\r\n numpy: 1.24.1\r\n scipy: 1.10.0\r\n Cython: None\r\n pandas: 1.5.3\r\n matplotlib: 3.6.3\r\n joblib: 1.2.0\r\nthreadpoolctl: 3.1.0\r\n\r\nBuilt with OpenMP: True\r\n\r\nthreadpoolctl info:\r\n user_api: openmp\r\n internal_api: openmp\r\n prefix: libgomp\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0\r\n version: None\r\n num_threads: 64\r\n\r\n user_api: blas\r\n internal_api: openblas\r\n prefix: libopenblas\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/numpy.libs/libopenblas64_p-r0-15028c96.3.21.so\r\n version: 0.3.21\r\nthreading_layer: pthreads\r\n architecture: SkylakeX\r\n num_threads: 64\r\n\r\n user_api: blas\r\n internal_api: openblas\r\n prefix: libopenblas\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scipy.libs/libopenblasp-r0-41284840.3.18.so\r\n version: 0.3.18\r\nthreading_layer: pthreads\r\n architecture: SkylakeX\r\n num_threads: 64\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/26205", "commit_html_url": null, "file_loc": "{'base_commit': '559609fe98ec2145788133687e64a6e87766bc77', 'files': [{'path': 'examples/feature_selection/plot_select_from_model_diabetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [145], 'mod': [123, 124, 125]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/feature_selection/plot_select_from_model_diabetes.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "559609fe98ec2145788133687e64a6e87766bc77", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/25525", "iss_label": "Bug\nmodule:feature_extraction", "title": "Extend SequentialFeatureSelector example to demonstrate how to use negative tol", "body": "### Describe the bug\r\n\r\nI utilized the **SequentialFeatureSelector** for feature selection in my code, with the direction set to \"backward.\" The tolerance value is negative and the selection process stops when the decrease in the metric, AUC in this case, is less than the specified tolerance. Generally, increasing the number of features results in a higher AUC, but sacrificing some features, especially correlated ones that offer little contribution, can produce a pessimistic model with a lower AUC. 
The code worked as expected in **sklearn 1.1.1**, but when I updated to **sklearn 1.2.1**, I encountered the following error.\r\n\r\n### Steps/Code to Reproduce\r\n\r\n```python\r\nfrom sklearn.datasets import load_breast_cancer\r\nfrom sklearn.linear_model import LogisticRegression\r\nfrom sklearn.feature_selection import SequentialFeatureSelector\r\nfrom sklearn.preprocessing import StandardScaler\r\nfrom sklearn.pipeline import Pipeline\r\n\r\nX, y = load_breast_cancer(return_X_y=True)\r\n\r\nTOL = -0.001\r\nfeature_selector = SequentialFeatureSelector(\r\n LogisticRegression(max_iter=1000),\r\n n_features_to_select=\"auto\",\r\n direction=\"backward\",\r\n scoring=\"roc_auc\",\r\n tol=TOL\r\n )\r\n\r\n\r\npipe = Pipeline(\r\n [('scaler', StandardScaler()), \r\n ('feature_selector', feature_selector), \r\n ('log_reg', LogisticRegression(max_iter=1000))]\r\n )\r\n\r\n\r\n\r\nif __name__ == \"__main__\":\r\n pipe.fit(X, y)\r\n print(pipe['log_reg'].coef_[0])\r\n\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n$ python sfs_tol.py \r\n[-2.0429818 0.5364346 -1.35765488 -2.85009904 -2.84603016]\r\n```\r\n\r\n### Actual Results\r\n\r\n```python-traceback\r\n$ python sfs_tol.py \r\nTraceback (most recent call last):\r\n File \"/home/modelling/users-workspace/nsofinij/lab/open-source/sfs_tol.py\", line 28, in \r\n pipe.fit(X, y)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 401, in fit\r\n Xt = self._fit(X, y, **fit_params_steps)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 359, in _fit\r\n X, fitted_transformer = fit_transform_one_cached(\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/joblib/memory.py\", line 349, in __call__\r\n return self.func(*args, **kwargs)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py\", line 893, in _fit_transform_one\r\n res = transformer.fit_transform(X, y, **fit_params)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_set_output.py\", line 142, in wrapped\r\n data_to_wrap = f(self, X, *args, **kwargs)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py\", line 862, in fit_transform\r\n return self.fit(X, y, **fit_params).transform(X)\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_selection/_sequential.py\", line 201, in fit\r\n self._validate_params()\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py\", line 581, in _validate_params\r\n validate_parameter_constraints(\r\n File \"/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_param_validation.py\", line 97, in validate_parameter_constraints\r\n raise InvalidParameterError(\r\nsklearn.utils._param_validation.InvalidParameterError: The 'tol' parameter of SequentialFeatureSelector must be None or a float in the range (0, inf). 
Got -0.001 instead.\r\n\r\n```\r\n\r\n### Versions\r\n\r\n```shell\r\nSystem:\r\n python: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0]\r\nexecutable: /home/modelling/opt/anaconda3/envs/py310/bin/python\r\n machine: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26\r\n\r\nPython dependencies:\r\n sklearn: 1.2.1\r\n pip: 23.0\r\n setuptools: 66.1.1\r\n numpy: 1.24.1\r\n scipy: 1.10.0\r\n Cython: None\r\n pandas: 1.5.3\r\n matplotlib: 3.6.3\r\n joblib: 1.2.0\r\nthreadpoolctl: 3.1.0\r\n\r\nBuilt with OpenMP: True\r\n\r\nthreadpoolctl info:\r\n user_api: openmp\r\n internal_api: openmp\r\n prefix: libgomp\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0\r\n version: None\r\n num_threads: 64\r\n\r\n user_api: blas\r\n internal_api: openblas\r\n prefix: libopenblas\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/numpy.libs/libopenblas64_p-r0-15028c96.3.21.so\r\n version: 0.3.21\r\nthreading_layer: pthreads\r\n architecture: SkylakeX\r\n num_threads: 64\r\n\r\n user_api: blas\r\n internal_api: openblas\r\n prefix: libopenblas\r\n filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scipy.libs/libopenblasp-r0-41284840.3.18.so\r\n version: 0.3.18\r\nthreading_layer: pthreads\r\n architecture: SkylakeX\r\n num_threads: 64\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/26205", "commit_html_url": null, "file_loc": "{'base_commit': '559609fe98ec2145788133687e64a6e87766bc77', 'files': [{'path': 'examples/feature_selection/plot_select_from_model_diabetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [145], 'mod': [123, 124, 125]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/feature_selection/plot_select_from_model_diabetes.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "pallets", "repo_name": "flask", "base_commit": "cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2264", "iss_label": "cli", "title": "Handle app factory in FLASK_APP", "body": "`FLASK_APP=myproject.app:create_app('dev')`\r\n[\r\nGunicorn does this with `eval`](https://github.com/benoitc/gunicorn/blob/fbd151e9841e2c87a18512d71475bcff863a5171/gunicorn/util.py#L364), which I'm not super happy with. Instead, we could use `literal_eval` to allow a simple list of arguments. 
The line should never be so complicated that `eval` would be necessary anyway.\r\n\r\n~~~python\r\n# might need to fix this regex\r\nm = re.search(r'(\\w+)(\\(.*\\))', app_obj)\r\n\r\nif m:\r\n app = getattr(mod, m.group(1))(*literal_eval(m.group(2)))\r\n~~~", "pr_html_url": "https://github.com/pallets/flask/pull/2326", "file_loc": "{'base_commit': 'cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 12]}, \"(None, 'find_best_app', 32)\": {'mod': [58, 62, 69, 71]}, \"(None, 'call_factory', 82)\": {'mod': [82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93]}, \"(None, 'locate_app', 125)\": {'mod': [151, 153, 154, 155, 156, 158]}}}, {'path': 'tests/test_cli.py', 'status': 'modified', 'Loc': {\"(None, 'test_locate_app', 148)\": {'add': [152], 'mod': [154, 155, 156, 157, 158, 159, 160, 161]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["flask/cli.py"], "doc": [], "test": ["tests/test_cli.py"], "config": [], "asset": []}} {"organization": "localstack", "repo_name": "localstack", "base_commit": "737ca72b7bce6e377dd6876eacee63338fa8c30c", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/894", "iss_label": "", "title": "ERROR:localstack.services.generic_proxy: Error forwarding request:", "body": "Starting local dev environment. CTRL-C to quit.\r\nStarting mock API Gateway (http port 4567)...\r\nStarting mock DynamoDB (http port 4569)...\r\nStarting mock SES (http port 4579)...\r\nStarting mock Kinesis (http port 4568)...\r\nStarting mock Redshift (http port 4577)...\r\nStarting mock S3 (http port 4572)...\r\nStarting mock CloudWatch (http port 4582)...\r\nStarting mock CloudFormation (http port 4581)...\r\nStarting mock SSM (http port 4583)...\r\nStarting mock SQS (http port 4576)...\r\nStarting local Elasticsearch (http port 4571)...\r\nStarting mock SNS (http port 4575)...\r\nStarting mock DynamoDB Streams service (http port 4570)...\r\nStarting mock Firehose service (http port 4573)...\r\nStarting mock Route53 (http port 4580)...\r\nStarting mock ES service (http port 4578)...\r\nStarting mock Lambda service (http port 4574)...\r\n2018-08-11T13:33:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, 
request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\n2018-08-11T13:34:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\n2018-08-11T13:35:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n\r\n2018-08-11T13:36:09:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) Traceback (most recent call last):\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py\", line 201, in forward\r\n headers=forward_headers)\r\n File 
\"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py\", line 508, in send\r\n raise ConnectionError(e, request=request)\r\nConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',))\r\n", "pr_html_url": "https://github.com/localstack/localstack/pull/1526", "file_loc": "{'base_commit': '737ca72b7bce6e377dd6876eacee63338fa8c30c', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186]}}}, {'path': 'localstack/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'localstack/services/kinesis/kinesis_starter.py', 'status': 'modified', 'Loc': {\"(None, 'start_kinesis', 14)\": {'add': [17], 'mod': [14, 23, 24]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/config.py", "localstack/services/kinesis/kinesis_starter.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "d2871b29754abd0f72cf42c299bb1c041519f7bc", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/30", "iss_label": "", "title": "[Feature request] Add example of finetuning the pretrained models on custom corpus", "body": "", "pr_html_url": "https://github.com/huggingface/transformers/pull/25107", "file_loc": "{'base_commit': 'd2871b29754abd0f72cf42c299bb1c041519f7bc', 'files': [{'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [75, 108]}, \"('PreTrainedModel', 'from_pretrained', 1959)\": {'add': [2227]}, \"(None, 'load_state_dict', 442)\": {'mod': [461]}, \"('PreTrainedModel', '_load_pretrained_model', 3095)\": {'mod': [3183, 3388, 3389, 3390, 3391, 3392, 3393, 3394, 3395, 3396, 3397, 3398, 3399, 3400, 3401, 3402, 3403, 3404]}}}, {'path': 'src/transformers/trainer.py', 'status': 'modified', 'Loc': {\"('Trainer', '__init__', 313)\": {'mod': [468, 469, 470]}, \"('Trainer', '_wrap_model', 1316)\": {'mod': [1382, 1385, 1387]}, \"('Trainer', 'train', 1453)\": {'mod': [1520]}, \"('Trainer', '_inner_training_loop', 1552)\": {'mod': [1654]}, \"('Trainer', 'create_accelerator_and_postprocess', 3866)\": {'mod': [3889]}}}, {'path': 'src/transformers/training_args.py', 'status': 'modified', 'Loc': {\"('TrainingArguments', None, 158)\": {'add': [464], 'mod': [439, 442, 445, 457]}, \"('TrainingArguments', '__post_init__', 1221)\": {'add': [1522, 1524, 1585], 'mod': [1529, 1530, 1531, 1533, 1534, 1535, 1536, 1537, 1543, 1544, 1547, 1548, 1550, 1551, 1555, 1556, 1558, 1559, 1560, 1589, 1591, 1593, 1594, 1595, 1596, 1597, 1598, 1599, 1602]}}}]}", "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/trainer.py", "src/transformers/modeling_utils.py", "src/transformers/training_args.py"], "doc": [], "test": [], "config": [], "asset": []}} @@ -7,22 +7,22 @@ {"organization": "huggingface", "repo_name": "transformers", "base_commit": "9fef668338b15e508bac99598dd139546fece00b", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/9", "iss_label": "", "title": "Crash at the end of training", "body": "Hi, I tried running the Squad model this morning (on a single GPU with gradient accumulation over 3 steps) but after 3 hours of training, my job failed with the following output:\r\n\r\nI was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8\r\n\r\nIs this an issue you know about?\r\n```\r\n11/08/2018 17:50:03 - INFO - __main__ - device cuda n_gpu 1 distributed training False\r\n11/08/2018 17:50:18 - INFO - __main__ - *** Example ***\r\n11/08/2018 17:50:18 - INFO - __main__ - unique_id: 1000000000\r\n11/08/2018 17:50:18 - INFO - __main__ - example_index: 0\r\n11/08/2018 17:50:18 - INFO - __main__ - doc_span_index: 0\r\n11/08/2018 17:50:18 - INFO - __main__ - tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend \" ve ##ni ##te ad me om ##nes \" . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . 
[SEP]\r\n11/08/2018 17:50:18 - INFO - __main__ - token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123\r\n11/08/2018 17:50:18 - INFO - __main__ - token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True\r\n11/08/2018 17:50:18 - INFO - __main__ - input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n11/08/2018 17:50:18 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0\r\n\r\n... [truncated] ...\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29314/29324 [3:27:55<00:04, 2.36it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29315/29324 [3:27:55<00:03, 2.44it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29316/29324 [3:27:56<00:03, 2.26it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29317/29324 [3:27:56<00:02, 2.35it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29318/29324 [3:27:56<00:02, 2.44it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29319/29324 [3:27:57<00:02, 2.25it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29320/29324 [3:27:57<00:01, 2.35it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29321/29324 [3:27:58<00:01, 2.41it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29322/29324 [3:27:58<00:00, 2.25it/s]\u001b[A\r\n\r\nIteration: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2589| 29323/29324 [3:27:59<00:00, 2.36it/s]\u001b[ATraceback (most recent call last):\r\n File \"code/run_squad.py\", line 929, in \r\n main()\r\n File \"code/run_squad.py\", line 862, in main\r\n loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py\", line 467, in forward\r\n start_loss = loss_fct(start_logits, start_positions)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py\", line 477, in __call__\r\n result = self.forward(*input, **kwargs)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py\", line 862, in forward\r\n ignore_index=self.ignore_index, reduction=self.reduction)\r\n File \"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 1550, in cross_entropy\r\n return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)\r\n File 
\"/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py\", line 1403, in nll_loss\r\n if input.size(0) != target.size(0):\r\nRuntimeError: dimension specified as 0 but tensor has no dimensions\r\n\r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py\", line 931, in __del__\r\n self.close()\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py\", line 1133, in close\r\n self._decr_instances(self)\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py\", line 496, in _decr_instances\r\n cls.monitor.exit()\r\n File \"/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py\", line 52, in exit\r\n self.join()\r\n File \"/usr/lib/python3.6/threading.py\", line 1053, in join\r\n raise RuntimeError(\"cannot join current thread\")\r\nRuntimeError: cannot join current thread\r\n```", "pr_html_url": "https://github.com/huggingface/transformers/pull/16310", "file_loc": "{'base_commit': '9fef668338b15e508bac99598dd139546fece00b', 'files': [{'path': 'tests/big_bird/test_modeling_big_bird.py', 'status': 'modified', 'Loc': {\"('BigBirdModelTester', '__init__', 47)\": {'mod': [73]}, \"('BigBirdModelTest', 'test_fast_integration', 561)\": {'mod': [584]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/big_bird/test_modeling_big_bird.py"], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "ccabcf1fca906bfa6b65a3189c1c41061e6c1042", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/3698", "iss_label": "", "title": "AttributeError: 'NoneType' object has no attribute 'read'", "body": "Hello :)\r\n\r\nAfter a recent upgrade for our [coala](https://github.com/coala/coala) project to `requests` 2.12.1 we encounter an exception in our test suites which seems to be caused by `requests`.\r\n\r\nBuild: https://ci.appveyor.com/project/coala/coala-bears/build/1.0.3537/job/1wm7b4u9yhgkxkgn\r\n\r\nRelevant part:\r\n```\r\n================================== FAILURES ===================================\r\n_________________ InvalidLinkBearTest.test_redirect_threshold _________________\r\nself = \r\n def test_redirect_threshold(self):\r\n \r\n long_url_redirect = \"\"\"\r\n https://bitbucket.org/api/301\r\n https://bitbucket.org/api/302\r\n \"\"\".splitlines()\r\n \r\n short_url_redirect = \"\"\"\r\n http://httpbin.org/status/301\r\n \"\"\".splitlines()\r\n \r\n self.assertResult(valid_file=long_url_redirect,\r\n invalid_file=short_url_redirect,\r\n> settings={'follow_redirects': 'yeah'})\r\ntests\\general\\InvalidLinkBearTest.py:157: \r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\ntests\\general\\InvalidLinkBearTest.py:75: in assertResult\r\n out = list(uut.run(\"valid\", valid_file, **settings))\r\nbears\\general\\InvalidLinkBear.py:80: in run\r\n file, timeout, link_ignore_regex):\r\nbears\\general\\InvalidLinkBear.py:53: in find_links_in_file\r\n code = InvalidLinkBear.get_status_code(link, timeout)\r\nbears\\general\\InvalidLinkBear.py:37: in get_status_code\r\n timeout=timeout).status_code\r\nC:\\Python34\\lib\\site-packages\\requests\\api.py:96: in head\r\n return request('head', url, **kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests\\api.py:56: in request\r\n return session.request(method=method, url=url, 
**kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests\\sessions.py:488: in request\r\n resp = self.send(prep, **send_kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests_mock\\mocker.py:69: in _fake_send\r\n return self._real_send(session, request, **kwargs)\r\nC:\\Python34\\lib\\site-packages\\requests\\sessions.py:641: in send\r\n r.content\r\nC:\\Python34\\lib\\site-packages\\requests\\models.py:772: in content\r\n self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()\r\n_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _\r\n def generate():\r\n # Special case for urllib3.\r\n if hasattr(self.raw, 'stream'):\r\n try:\r\n for chunk in self.raw.stream(chunk_size, decode_content=True):\r\n yield chunk\r\n except ProtocolError as e:\r\n raise ChunkedEncodingError(e)\r\n except DecodeError as e:\r\n raise ContentDecodingError(e)\r\n except ReadTimeoutError as e:\r\n raise ConnectionError(e)\r\n else:\r\n # Standard file-like object.\r\n while True:\r\n> chunk = self.raw.read(chunk_size)\r\nE AttributeError: 'NoneType' object has no attribute 'read'\r\nC:\\Python34\\lib\\site-packages\\requests\\models.py:705: AttributeError\r\n```\r\nhappens on Windows and Linux.\r\n\r\nThanks in advance :)", "pr_html_url": "https://github.com/psf/requests/pull/3718", "file_loc": "{'base_commit': 'ccabcf1fca906bfa6b65a3189c1c41061e6c1042', 'files': [{'path': 'requests/models.py', 'status': 'modified', 'Loc': {\"('Response', 'content', 763)\": {'mod': [772]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {\"('TestRequests', None, 55)\": {'add': [1096]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/models.py"], "doc": [], "test": ["tests/test_requests.py"], "config": [], "asset": []}} {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "fc805074be7b3b507bc1699e537f9b691c6f91b9", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/674", "iss_label": "bug\ndocumentation", "title": "ModuleNotFoundError: No module named 'tkinter'", "body": "**Bug description**\r\nWhen running `gpt-engineer --improve` (using the recent version from PyPI), I get the following output:\r\n\r\n```\r\n$ gpt-engineer --improve\r\nTraceback (most recent call last):\r\n File \"/home/.../.local/bin/gpt-engineer\", line 5, in \r\n from gpt_engineer.main import app\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/main.py\", line 12, in \r\n from gpt_engineer.collect import collect_learnings\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/collect.py\", line 5, in \r\n from gpt_engineer import steps\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/steps.py\", line 19, in \r\n from gpt_engineer.file_selector import FILE_LIST_NAME, ask_for_files\r\n File \"/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/file_selector.py\", line 4, in \r\n import tkinter as tk\r\nModuleNotFoundError: No module named 'tkinter'\r\n```\r\n\r\n\r\n**Expected behavior**\r\nNo error.\r\n\r\nIn https://github.com/AntonOsika/gpt-engineer/pull/465, no changes where made to the required packages, so tkinter might be added there. 
(Or made optional.)\r\n\r\nEDIT: The error happens always, regardless of the command line parameter.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/675", "file_loc": "{'base_commit': 'fc805074be7b3b507bc1699e537f9b691c6f91b9', 'files': [{'path': 'docs/installation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/installation.rst"], "test": [], "config": [], "asset": []}}
-{"organization": "pallets", "repo_name": "flask", "base_commit": "85dce2c836fe03aefc07b7f4e0aec575e170f1cd", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/593", "iss_label": "blueprints", "title": "Nestable blueprints", "body": "I'd like to be able to register \"sub-blueprints\" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the \"parent\" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to in `add_url_rule`. A na\u00edve implementation could look like this:\n\n``` python\nclass Blueprint(object):\n ...\n\n def register_blueprint(self, blueprint, **options):\n def deferred(state):\n url_prefix = options.get('url_prefix')\n if url_prefix is None:\n url_prefix = blueprint.url_prefix\n if 'url_prefix' in options:\n del options['url_prefix']\n\n state.app.register_blueprint(blueprint, url_prefix, **options)\n self.record(deferred)\n```\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/3923", "commit_html_url": null, "file_loc": "{'base_commit': '85dce2c836fe03aefc07b7f4e0aec575e170f1cd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 71)': {'add': [71]}}}, {'path': 'docs/blueprints.rst', 'status': 'modified', 'Loc': {'(None, None, 122)': {'add': [122]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': {\"('Flask', '__call__', 1982)\": {'add': [1987]}, \"('Flask', 'update_template_context', 712)\": {'mod': [726, 727, 728]}, \"('Flask', 'register_blueprint', 971)\": {'mod': [990, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1004]}, \"('Flask', '_find_error_handler', 1230)\": {'mod': [1238, 1239, 1240, 1241, 1242, 1243, 1244]}, \"('Flask', 'preprocess_request', 1741)\": {'mod': [1752, 1755, 1756, 1761, 1762]}, \"('Flask', 'process_response', 1768)\": {'mod': [1782, 1784, 1785]}, \"('Flask', 'do_teardown_request', 1794)\": {'mod': [1818, 1819, 1820]}}}, {'path': 'src/flask/blueprints.py', 'status': 'modified', 'Loc': {\"('BlueprintSetupState', '__init__', 16)\": {'add': [47]}, \"('Blueprint', '__init__', 141)\": {'add': [170]}, \"('Blueprint', 'register', 213)\": {'add': [225], 'mod': [281, 282, 286, 287, 288, 289, 290, 291, 292, 293]}, \"('BlueprintSetupState', 'add_url_rule', 53)\": {'mod': [71]}, \"('Blueprint', None, 78)\": {'mod': [213]}}}, {'path': 'tests/test_blueprints.py', 'status': 'modified', 'Loc': {\"(None, 'test_app_url_processors', 828)\": {'add': [852]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/flask/blueprints.py", "src/flask/app.py"], "doc": ["docs/blueprints.rst", "CHANGES.rst"], "test": ["tests/test_blueprints.py"], "config": [], "asset": []}}
+{"organization": "pallets", 
"repo_name": "flask", "base_commit": "85dce2c836fe03aefc07b7f4e0aec575e170f1cd", "iss_html_url": "https://github.com/pallets/flask/issues/593", "iss_label": "blueprints", "title": "Nestable blueprints", "body": "I'd like to be able to register \"sub-blueprints\" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the \"parent\" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to in `add_url_rule`. A na\u00edve implementation could look like this:\n\n``` python\nclass Blueprint(object):\n ...\n\n def register_blueprint(self, blueprint, **options):\n def deferred(state):\n url_prefix = options.get('url_prefix')\n if url_prefix is None:\n url_prefix = blueprint.url_prefix\n if 'url_prefix' in options:\n del options['url_prefix']\n\n state.app.register_blueprint(blueprint, url_prefix, **options)\n self.record(deferred)\n```\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/3923", "commit_html_url": null, "file_loc": "{'base_commit': '85dce2c836fe03aefc07b7f4e0aec575e170f1cd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 71)': {'add': [71]}}}, {'path': 'docs/blueprints.rst', 'status': 'modified', 'Loc': {'(None, None, 122)': {'add': [122]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': {\"('Flask', '__call__', 1982)\": {'add': [1987]}, \"('Flask', 'update_template_context', 712)\": {'mod': [726, 727, 728]}, \"('Flask', 'register_blueprint', 971)\": {'mod': [990, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1004]}, \"('Flask', '_find_error_handler', 1230)\": {'mod': [1238, 1239, 1240, 1241, 1242, 1243, 1244]}, \"('Flask', 'preprocess_request', 1741)\": {'mod': [1752, 1755, 1756, 1761, 1762]}, \"('Flask', 'process_response', 1768)\": {'mod': [1782, 1784, 1785]}, \"('Flask', 'do_teardown_request', 1794)\": {'mod': [1818, 1819, 1820]}}}, {'path': 'src/flask/blueprints.py', 'status': 'modified', 'Loc': {\"('BlueprintSetupState', '__init__', 16)\": {'add': [47]}, \"('Blueprint', '__init__', 141)\": {'add': [170]}, \"('Blueprint', 'register', 213)\": {'add': [225], 'mod': [281, 282, 286, 287, 288, 289, 290, 291, 292, 293]}, \"('BlueprintSetupState', 'add_url_rule', 53)\": {'mod': [71]}, \"('Blueprint', None, 78)\": {'mod': [213]}}}, {'path': 'tests/test_blueprints.py', 'status': 'modified', 'Loc': {\"(None, 'test_app_url_processors', 828)\": {'add': [852]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/flask/blueprints.py", "src/flask/app.py"], "doc": ["docs/blueprints.rst", "CHANGES.rst"], "test": ["tests/test_blueprints.py"], "config": [], "asset": []}} {"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "f92d61497a426a19818625c3ccdaae9beeb82b31", "iss_has_pr": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14263", "iss_label": "bug", "title": "[Bug]: KeyError: \"do_not_save\" when trying to save a prompt", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nWhen I try to save a prompt, it errors in the console saying\r\n```\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py\", line 212, in save_styles\r\n style_paths.remove(\"do_not_save\")\r\nKeyError: 
'do_not_save'\r\n```\r\nand the file is not modified\r\nI manually commented it out and it doesn't seem to break anything, except that it is saved to styles.csv.csv instead of styles.csv\n\n### Steps to reproduce the problem\n\nTry to save a prompt\r\n\n\n### What should have happened?\n\nSave into style.csv with no error\n\n### Sysinfo\n\n{\r\n \"Platform\": \"Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38\",\r\n \"Python\": \"3.11.4\",\r\n \"Version\": \"v1.7.0-RC-5-gf92d6149\",\r\n \"Commit\": \"f92d61497a426a19818625c3ccdaae9beeb82b31\",\r\n \"Script path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui\",\r\n \"Data path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui\",\r\n \"Extensions dir\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions\",\r\n \"Checksum\": \"e15aad6adb98a2a0ad13cad2b45b61b03565ef4f258783021da82b4ef7f37fa9\",\r\n \"Commandline\": [\r\n \"launch.py\"\r\n ],\r\n \"Torch env info\": {\r\n \"torch_version\": \"2.2.0\",\r\n \"is_debug_build\": \"False\",\r\n \"cuda_compiled_version\": \"N/A\",\r\n \"gcc_version\": \"(GCC) 13.2.1 20230801\",\r\n \"clang_version\": \"16.0.6\",\r\n \"cmake_version\": \"version 3.26.4\",\r\n \"os\": \"Arch Linux (x86_64)\",\r\n \"libc_version\": \"glibc-2.38\",\r\n \"python_version\": \"3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)\",\r\n \"python_platform\": \"Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38\",\r\n \"is_cuda_available\": \"True\",\r\n \"cuda_runtime_version\": null,\r\n \"cuda_module_loading\": \"LAZY\",\r\n \"nvidia_driver_version\": null,\r\n \"nvidia_gpu_models\": \"AMD Radeon RX 7900 XTX (gfx1100)\",\r\n \"cudnn_version\": null,\r\n \"pip_version\": \"pip3\",\r\n \"pip_packages\": [\r\n \"numpy==1.23.5\",\r\n \"open-clip-torch==2.20.0\",\r\n \"pytorch-lightning==1.9.4\",\r\n \"pytorch-triton-rocm==2.1.0+dafe145982\",\r\n \"torch==2.2.0.dev20231208+rocm5.6\",\r\n \"torchdiffeq==0.2.3\",\r\n \"torchmetrics==1.2.1\",\r\n \"torchsde==0.2.6\",\r\n \"torchvision==0.17.0.dev20231208+rocm5.6\"\r\n ],\r\n \"conda_packages\": [\r\n \"numpy 1.26.2 py311h24aa872_0 \",\r\n \"numpy-base 1.26.2 py311hbfb1bba_0 \",\r\n \"open-clip-torch 2.20.0 pypi_0 pypi\",\r\n \"pytorch-lightning 1.9.4 pypi_0 pypi\",\r\n \"pytorch-triton-rocm 2.1.0+dafe145982 pypi_0 pypi\",\r\n \"torch 2.2.0.dev20231208+rocm5.7 pypi_0 pypi\",\r\n \"torchaudio 2.2.0.dev20231208+rocm5.7 pypi_0 pypi\",\r\n \"torchdiffeq 0.2.3 pypi_0 pypi\",\r\n \"torchmetrics 1.2.1 pypi_0 pypi\",\r\n \"torchsde 0.2.5 pypi_0 pypi\",\r\n \"torchvision 0.17.0.dev20231208+rocm5.7 pypi_0 pypi\"\r\n ],\r\n \"hip_compiled_version\": \"5.6.31061-8c743ae5d\",\r\n \"hip_runtime_version\": \"5.6.31061\",\r\n \"miopen_runtime_version\": \"2.20.0\",\r\n \"caching_allocator_config\": \"\",\r\n \"is_xnnpack_available\": \"True\",\r\n \"cpu_info\": [\r\n \"Architecture: x86_64\",\r\n \"CPU op-mode(s): 32-bit, 64-bit\",\r\n \"Address sizes: 48 bits physical, 48 bits virtual\",\r\n \"Byte Order: Little Endian\",\r\n \"CPU(s): 32\",\r\n \"On-line CPU(s) list: 0-31\",\r\n \"Vendor ID: AuthenticAMD\",\r\n \"Model name: AMD Ryzen 9 5950X 16-Core Processor\",\r\n \"CPU family: 25\",\r\n \"Model: 33\",\r\n \"Thread(s) per core: 2\",\r\n \"Core(s) per socket: 16\",\r\n \"Socket(s): 1\",\r\n \"Stepping: 0\",\r\n \"Frequency boost: disabled\",\r\n \"CPU(s) scaling MHz: 49%\",\r\n \"CPU max MHz: 6279.4922\",\r\n \"CPU min MHz: 2200.0000\",\r\n \"BogoMIPS: 8383.88\",\r\n \"Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr 
sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap\",\r\n \"Virtualization: AMD-V\",\r\n \"L1d cache: 512 KiB (16 instances)\",\r\n \"L1i cache: 512 KiB (16 instances)\",\r\n \"L2 cache: 8 MiB (16 instances)\",\r\n \"L3 cache: 64 MiB (2 instances)\",\r\n \"NUMA node(s): 1\",\r\n \"NUMA node0 CPU(s): 0-31\",\r\n \"Vulnerability Gather data sampling: Not affected\",\r\n \"Vulnerability Itlb multihit: Not affected\",\r\n \"Vulnerability L1tf: Not affected\",\r\n \"Vulnerability Mds: Not affected\",\r\n \"Vulnerability Meltdown: Not affected\",\r\n \"Vulnerability Mmio stale data: Not affected\",\r\n \"Vulnerability Retbleed: Not affected\",\r\n \"Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode\",\r\n \"Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\",\r\n \"Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization\",\r\n \"Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected\",\r\n \"Vulnerability Srbds: Not affected\",\r\n \"Vulnerability Tsx async abort: Not affected\"\r\n ]\r\n },\r\n \"Exceptions\": [],\r\n \"CPU\": {\r\n \"model\": \"\",\r\n \"count logical\": 32,\r\n \"count physical\": 16\r\n },\r\n \"RAM\": {\r\n \"total\": \"31GB\",\r\n \"used\": \"6GB\",\r\n \"free\": \"20GB\",\r\n \"active\": \"7GB\",\r\n \"inactive\": \"2GB\",\r\n \"buffers\": \"172MB\",\r\n \"cached\": \"5GB\",\r\n \"shared\": \"199MB\"\r\n },\r\n \"Extensions\": [\r\n {\r\n \"name\": \"clip-interrogator-ext\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/clip-interrogator-ext\",\r\n \"version\": \"0f1a4591\",\r\n \"branch\": \"main\",\r\n \"remote\": \"https://github.com/pharmapsychotic/clip-interrogator-ext.git\"\r\n },\r\n {\r\n \"name\": \"latent-upscale\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/latent-upscale\",\r\n \"version\": \"b9f75f44\",\r\n \"branch\": \"main\",\r\n \"remote\": \"https://github.com/feynlee/latent-upscale.git\"\r\n },\r\n {\r\n \"name\": \"sd-webui-controlnet\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet\",\r\n \"version\": \"feea1f65\",\r\n \"branch\": \"main\",\r\n \"remote\": \"https://github.com/Mikubill/sd-webui-controlnet.git\"\r\n },\r\n {\r\n \"name\": \"ultimate-upscale-for-automatic1111\",\r\n \"path\": \"/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111\",\r\n \"version\": \"728ffcec\",\r\n \"branch\": \"master\",\r\n \"remote\": \"https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git\"\r\n 
}\r\n ],\r\n \"Inactive extensions\": [],\r\n \"Environment\": {\r\n \"GIT\": \"git\",\r\n \"GRADIO_ANALYTICS_ENABLED\": \"False\",\r\n \"TORCH_COMMAND\": \"pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.6\"\r\n },\r\n \"Config\": {\r\n \"samples_save\": true,\r\n \"samples_format\": \"png\",\r\n \"samples_filename_pattern\": \"\",\r\n \"save_images_add_number\": true,\r\n \"save_images_replace_action\": \"Replace\",\r\n \"grid_save\": true,\r\n \"grid_format\": \"png\",\r\n \"grid_extended_filename\": false,\r\n \"grid_only_if_multiple\": true,\r\n \"grid_prevent_empty_spots\": false,\r\n \"grid_zip_filename_pattern\": \"\",\r\n \"n_rows\": -1,\r\n \"font\": \"\",\r\n \"grid_text_active_color\": \"#000000\",\r\n \"grid_text_inactive_color\": \"#999999\",\r\n \"grid_background_color\": \"#ffffff\",\r\n \"save_images_before_face_restoration\": false,\r\n \"save_images_before_highres_fix\": false,\r\n \"save_images_before_color_correction\": false,\r\n \"save_mask\": false,\r\n \"save_mask_composite\": false,\r\n \"jpeg_quality\": 80,\r\n \"webp_lossless\": false,\r\n \"export_for_4chan\": true,\r\n \"img_downscale_threshold\": 4.0,\r\n \"target_side_length\": 4000,\r\n \"img_max_size_mp\": 200,\r\n \"use_original_name_batch\": true,\r\n \"use_upscaler_name_as_suffix\": false,\r\n \"save_selected_only\": true,\r\n \"save_init_img\": false,\r\n \"temp_dir\": \"\",\r\n \"clean_temp_dir_at_start\": false,\r\n \"save_incomplete_images\": false,\r\n \"notification_audio\": true,\r\n \"notification_volume\": 100,\r\n \"outdir_samples\": \"\",\r\n \"outdir_txt2img_samples\": \"outputs/txt2img-images\",\r\n \"outdir_img2img_samples\": \"outputs/img2img-images\",\r\n \"outdir_extras_samples\": \"outputs/extras-images\",\r\n \"outdir_grids\": \"\",\r\n \"outdir_txt2img_grids\": \"outputs/txt2img-grids\",\r\n \"outdir_img2img_grids\": \"outputs/img2img-grids\",\r\n \"outdir_save\": \"log/images\",\r\n \"outdir_init_images\": \"outputs/init-images\",\r\n \"save_to_dirs\": true,\r\n \"grid_save_to_dirs\": true,\r\n \"use_save_to_dirs_for_ui\": false,\r\n \"directories_filename_pattern\": \"[date]\",\r\n \"directories_max_prompt_words\": 8,\r\n \"ESRGAN_tile\": 192,\r\n \"ESRGAN_tile_overlap\": 8,\r\n \"realesrgan_enabled_models\": [\r\n \"R-ESRGAN 4x+\",\r\n \"R-ESRGAN 4x+ Anime6B\"\r\n ],\r\n \"upscaler_for_img2img\": null,\r\n \"face_restoration\": false,\r\n \"face_restoration_model\": \"CodeFormer\",\r\n \"code_former_weight\": 0.5,\r\n \"face_restoration_unload\": false,\r\n \"auto_launch_browser\": \"Local\",\r\n \"enable_console_prompts\": false,\r\n \"show_warnings\": false,\r\n \"show_gradio_deprecation_warnings\": true,\r\n \"memmon_poll_rate\": 8,\r\n \"samples_log_stdout\": false,\r\n \"multiple_tqdm\": true,\r\n \"print_hypernet_extra\": false,\r\n \"list_hidden_files\": true,\r\n \"disable_mmap_load_safetensors\": false,\r\n \"hide_ldm_prints\": true,\r\n \"dump_stacks_on_signal\": false,\r\n \"api_enable_requests\": true,\r\n \"api_forbid_local_requests\": true,\r\n \"api_useragent\": \"\",\r\n \"unload_models_when_training\": false,\r\n \"pin_memory\": false,\r\n \"save_optimizer_state\": false,\r\n \"save_training_settings_to_txt\": true,\r\n \"dataset_filename_word_regex\": \"\",\r\n \"dataset_filename_join_string\": \" \",\r\n \"training_image_repeats_per_epoch\": 1,\r\n \"training_write_csv_every\": 500,\r\n \"training_xattention_optimizations\": false,\r\n \"training_enable_tensorboard\": false,\r\n \"training_tensorboard_save_images\": 
false,\r\n \"training_tensorboard_flush_every\": 120,\r\n \"sd_model_checkpoint\": \"AOM3A1B_orangemixs.safetensors [5493a0ec49]\",\r\n \"sd_checkpoints_limit\": 1,\r\n \"sd_checkpoints_keep_in_cpu\": true,\r\n \"sd_checkpoint_cache\": 0,\r\n \"sd_unet\": \"Automatic\",\r\n \"enable_quantization\": false,\r\n \"enable_emphasis\": true,\r\n \"enable_batch_seeds\": true,\r\n \"comma_padding_backtrack\": 20,\r\n \"CLIP_stop_at_last_layers\": 1,\r\n \"upcast_attn\": true,\r\n \"randn_source\": \"GPU\",\r\n \"tiling\": false,\r\n \"hires_fix_refiner_pass\": \"second pass\",\r\n \"sdxl_crop_top\": 0,\r\n \"sdxl_crop_left\": 0,\r\n \"sdxl_refiner_low_aesthetic_score\": 2.5,\r\n \"sdxl_refiner_high_aesthetic_score\": 6.0,\r\n \"sd_vae_checkpoint_cache\": 1,\r\n \"sd_vae\": \"orangemix.vae.pt\",\r\n \"sd_vae_overrides_per_model_preferences\": true,\r\n \"auto_vae_precision\": true,\r\n \"sd_vae_encode_method\": \"Full\",\r\n \"sd_vae_decode_method\": \"Full\",\r\n \"inpainting_mask_weight\": 1.0,\r\n \"initial_noise_multiplier\": 1.0,\r\n \"img2img_extra_noise\": 0.0,\r\n \"img2img_color_correction\": false,\r\n \"img2img_fix_steps\": false,\r\n \"img2img_background_color\": \"#ffffff\",\r\n \"img2img_editor_height\": 720,\r\n \"img2img_sketch_default_brush_color\": \"#ffffff\",\r\n \"img2img_inpaint_mask_brush_color\": \"#ffffff\",\r\n \"img2img_inpaint_sketch_default_brush_color\": \"#ffffff\",\r\n \"return_mask\": false,\r\n \"return_mask_composite\": false,\r\n \"img2img_batch_show_results_limit\": 32,\r\n \"cross_attention_optimization\": \"Automatic\",\r\n \"s_min_uncond\": 0.0,\r\n \"token_merging_ratio\": 0.0,\r\n \"token_merging_ratio_img2img\": 0.0,\r\n \"token_merging_ratio_hr\": 0.0,\r\n \"pad_cond_uncond\": false,\r\n \"persistent_cond_cache\": true,\r\n \"batch_cond_uncond\": true,\r\n \"use_old_emphasis_implementation\": false,\r\n \"use_old_karras_scheduler_sigmas\": false,\r\n \"no_dpmpp_sde_batch_determinism\": false,\r\n \"use_old_hires_fix_width_height\": false,\r\n \"dont_fix_second_order_samplers_schedule\": false,\r\n \"hires_fix_use_firstpass_conds\": false,\r\n \"use_old_scheduling\": false,\r\n \"interrogate_keep_models_in_memory\": false,\r\n \"interrogate_return_ranks\": false,\r\n \"interrogate_clip_num_beams\": 1,\r\n \"interrogate_clip_min_length\": 24,\r\n \"interrogate_clip_max_length\": 48,\r\n \"interrogate_clip_dict_limit\": 1500,\r\n \"interrogate_clip_skip_categories\": [],\r\n \"interrogate_deepbooru_score_threshold\": 0.5,\r\n \"deepbooru_sort_alpha\": true,\r\n \"deepbooru_use_spaces\": true,\r\n \"deepbooru_escape\": true,\r\n \"deepbooru_filter_tags\": \"\",\r\n \"extra_networks_show_hidden_directories\": true,\r\n \"extra_networks_dir_button_function\": false,\r\n \"extra_networks_hidden_models\": \"When searched\",\r\n \"extra_networks_default_multiplier\": 1.0,\r\n \"extra_networks_card_width\": 0,\r\n \"extra_networks_card_height\": 0,\r\n \"extra_networks_card_text_scale\": 1.0,\r\n \"extra_networks_card_show_desc\": true,\r\n \"extra_networks_card_order_field\": \"Path\",\r\n \"extra_networks_card_order\": \"Ascending\",\r\n \"extra_networks_add_text_separator\": \" \",\r\n \"ui_extra_networks_tab_reorder\": \"\",\r\n \"textual_inversion_print_at_load\": false,\r\n \"textual_inversion_add_hashes_to_infotext\": true,\r\n \"sd_hypernetwork\": \"None\",\r\n \"keyedit_precision_attention\": 0.1,\r\n \"keyedit_precision_extra\": 0.05,\r\n \"keyedit_delimiters\": \".,\\\\/!?%^*;:{}=`~() \",\r\n \"keyedit_delimiters_whitespace\": [\r\n \"Tab\",\r\n 
\"Carriage Return\",\r\n \"Line Feed\"\r\n ],\r\n \"disable_token_counters\": false,\r\n \"return_grid\": true,\r\n \"do_not_show_images\": false,\r\n \"js_modal_lightbox\": true,\r\n \"js_modal_lightbox_initially_zoomed\": true,\r\n \"js_modal_lightbox_gamepad\": false,\r\n \"js_modal_lightbox_gamepad_repeat\": 250,\r\n \"gallery_height\": \"\",\r\n \"compact_prompt_box\": false,\r\n \"samplers_in_dropdown\": true,\r\n \"dimensions_and_batch_together\": true,\r\n \"sd_checkpoint_dropdown_use_short\": false,\r\n \"hires_fix_show_sampler\": false,\r\n \"hires_fix_show_prompts\": false,\r\n \"txt2img_settings_accordion\": false,\r\n \"img2img_settings_accordion\": false,\r\n \"localization\": \"None\",\r\n \"quicksettings_list\": [\r\n \"sd_model_checkpoint\"\r\n ],\r\n \"ui_tab_order\": [],\r\n \"hidden_tabs\": [],\r\n \"ui_reorder_list\": [],\r\n \"gradio_theme\": \"Default\",\r\n \"gradio_themes_cache\": true,\r\n \"show_progress_in_title\": true,\r\n \"send_seed\": true,\r\n \"send_size\": true,\r\n \"enable_pnginfo\": true,\r\n \"save_txt\": false,\r\n \"add_model_name_to_info\": true,\r\n \"add_model_hash_to_info\": true,\r\n \"add_vae_name_to_info\": true,\r\n \"add_vae_hash_to_info\": true,\r\n \"add_user_name_to_info\": false,\r\n \"add_version_to_infotext\": true,\r\n \"disable_weights_auto_swap\": true,\r\n \"infotext_skip_pasting\": [],\r\n \"infotext_styles\": \"Apply if any\",\r\n \"show_progressbar\": true,\r\n \"live_previews_enable\": false,\r\n \"live_previews_image_format\": \"png\",\r\n \"show_progress_grid\": true,\r\n \"show_progress_every_n_steps\": 5,\r\n \"show_progress_type\": \"Approx NN\",\r\n \"live_preview_allow_lowvram_full\": false,\r\n \"live_preview_content\": \"Prompt\",\r\n \"live_preview_refresh_period\": 300.0,\r\n \"live_preview_fast_interrupt\": false,\r\n \"hide_samplers\": [],\r\n \"eta_ddim\": 0.0,\r\n \"eta_ancestral\": 1.0,\r\n \"ddim_discretize\": \"uniform\",\r\n \"s_churn\": 0.0,\r\n \"s_tmin\": 0.0,\r\n \"s_tmax\": 0.0,\r\n \"s_noise\": 1.0,\r\n \"k_sched_type\": \"Automatic\",\r\n \"sigma_min\": 0.0,\r\n \"sigma_max\": 0.0,\r\n \"rho\": 0.0,\r\n \"eta_noise_seed_delta\": 0,\r\n \"always_discard_next_to_last_sigma\": false,\r\n \"sgm_noise_multiplier\": false,\r\n \"uni_pc_variant\": \"bh1\",\r\n \"uni_pc_skip_type\": \"time_uniform\",\r\n \"uni_pc_order\": 3,\r\n \"uni_pc_lower_order_final\": true,\r\n \"postprocessing_enable_in_main_ui\": [],\r\n \"postprocessing_operation_order\": [],\r\n \"upscaling_max_images_in_cache\": 5,\r\n \"postprocessing_existing_caption_action\": \"Ignore\",\r\n \"disabled_extensions\": [],\r\n \"disable_all_extensions\": \"none\",\r\n \"restore_config_state_file\": \"\",\r\n \"sd_checkpoint_hash\": \"5493a0ec491f5961dbdc1c861404088a6ae9bd4007f6a3a7c5dee8789cdc1361\",\r\n \"ldsr_steps\": 100,\r\n \"ldsr_cached\": false,\r\n \"SCUNET_tile\": 256,\r\n \"SCUNET_tile_overlap\": 8,\r\n \"SWIN_tile\": 192,\r\n \"SWIN_tile_overlap\": 8,\r\n \"SWIN_torch_compile\": false,\r\n \"hypertile_enable_unet\": false,\r\n \"hypertile_enable_unet_secondpass\": false,\r\n \"hypertile_max_depth_unet\": 3,\r\n \"hypertile_max_tile_unet\": 256,\r\n \"hypertile_swap_size_unet\": 3,\r\n \"hypertile_enable_vae\": false,\r\n \"hypertile_max_depth_vae\": 3,\r\n \"hypertile_max_tile_vae\": 128,\r\n \"hypertile_swap_size_vae\": 3,\r\n \"control_net_detectedmap_dir\": \"detected_maps\",\r\n \"control_net_models_path\": \"\",\r\n \"control_net_modules_path\": \"\",\r\n \"control_net_unit_count\": 3,\r\n \"control_net_model_cache_size\": 1,\r\n 
\"control_net_inpaint_blur_sigma\": 7,\r\n \"control_net_no_high_res_fix\": false,\r\n \"control_net_no_detectmap\": false,\r\n \"control_net_detectmap_autosaving\": false,\r\n \"control_net_allow_script_control\": false,\r\n \"control_net_sync_field_args\": true,\r\n \"controlnet_show_batch_images_in_ui\": false,\r\n \"controlnet_increment_seed_during_batch\": false,\r\n \"controlnet_disable_openpose_edit\": false,\r\n \"controlnet_ignore_noninpaint_mask\": false,\r\n \"lora_functional\": false,\r\n \"sd_lora\": \"None\",\r\n \"lora_preferred_name\": \"Alias from file\",\r\n \"lora_add_hashes_to_infotext\": true,\r\n \"lora_show_all\": false,\r\n \"lora_hide_unknown_for_versions\": [],\r\n \"lora_in_memory_limit\": 0,\r\n \"extra_options_txt2img\": [],\r\n \"extra_options_img2img\": [],\r\n \"extra_options_cols\": 1,\r\n \"extra_options_accordion\": false,\r\n \"canvas_hotkey_zoom\": \"Alt\",\r\n \"canvas_hotkey_adjust\": \"Ctrl\",\r\n \"canvas_hotkey_move\": \"F\",\r\n \"canvas_hotkey_fullscreen\": \"S\",\r\n \"canvas_hotkey_reset\": \"R\",\r\n \"canvas_hotkey_overlap\": \"O\",\r\n \"canvas_show_tooltip\": true,\r\n \"canvas_auto_expand\": true,\r\n \"canvas_blur_prompt\": false,\r\n \"canvas_disabled_functions\": [\r\n \"Overlap\"\r\n ]\r\n },\r\n \"Startup\": {\r\n \"total\": 11.257086753845215,\r\n \"records\": {\r\n \"initial startup\": 0.02352619171142578,\r\n \"prepare environment/checks\": 3.457069396972656e-05,\r\n \"prepare environment/git version info\": 0.009780406951904297,\r\n \"prepare environment/torch GPU test\": 2.7273693084716797,\r\n \"prepare environment/clone repositores\": 0.038356781005859375,\r\n \"prepare environment/run extensions installers/sd-webui-controlnet\": 0.14071893692016602,\r\n \"prepare environment/run extensions installers/ultimate-upscale-for-automatic1111\": 2.288818359375e-05,\r\n \"prepare environment/run extensions installers/clip-interrogator-ext\": 2.8869497776031494,\r\n \"prepare environment/run extensions installers/latent-upscale\": 5.626678466796875e-05,\r\n \"prepare environment/run extensions installers\": 3.0277533531188965,\r\n \"prepare environment\": 5.820652484893799,\r\n \"launcher\": 0.0008344650268554688,\r\n \"import torch\": 2.0337331295013428,\r\n \"import gradio\": 0.6256029605865479,\r\n \"setup paths\": 0.9430902004241943,\r\n \"import ldm\": 0.0025310516357421875,\r\n \"import sgm\": 2.384185791015625e-06,\r\n \"initialize shared\": 0.047745466232299805,\r\n \"other imports\": 0.5719733238220215,\r\n \"opts onchange\": 0.0002732276916503906,\r\n \"setup SD model\": 0.0003185272216796875,\r\n \"setup codeformer\": 0.07199668884277344,\r\n \"setup gfpgan\": 0.009232521057128906,\r\n \"set samplers\": 2.8371810913085938e-05,\r\n \"list extensions\": 0.0010488033294677734,\r\n \"restore config state file\": 5.4836273193359375e-06,\r\n \"list SD models\": 0.004712820053100586,\r\n \"list localizations\": 0.0001246929168701172,\r\n \"load scripts/custom_code.py\": 0.001154184341430664,\r\n \"load scripts/img2imgalt.py\": 0.0002789497375488281,\r\n \"load scripts/loopback.py\": 0.0001888275146484375,\r\n \"load scripts/outpainting_mk_2.py\": 0.0002484321594238281,\r\n \"load scripts/poor_mans_outpainting.py\": 0.0001766681671142578,\r\n \"load scripts/postprocessing_caption.py\": 0.0001506805419921875,\r\n \"load scripts/postprocessing_codeformer.py\": 0.00015020370483398438,\r\n \"load scripts/postprocessing_create_flipped_copies.py\": 0.00014519691467285156,\r\n \"load scripts/postprocessing_focal_crop.py\": 
0.00043463706970214844,\r\n \"load scripts/postprocessing_gfpgan.py\": 0.00014495849609375,\r\n \"load scripts/postprocessing_split_oversized.py\": 0.00015592575073242188,\r\n \"load scripts/postprocessing_upscale.py\": 0.00021982192993164062,\r\n \"load scripts/processing_autosized_crop.py\": 0.0001621246337890625,\r\n \"load scripts/prompt_matrix.py\": 0.0001780986785888672,\r\n \"load scripts/prompts_from_file.py\": 0.0001876354217529297,\r\n \"load scripts/sd_upscale.py\": 0.00016450881958007812,\r\n \"load scripts/xyz_grid.py\": 0.0010995864868164062,\r\n \"load scripts/ldsr_model.py\": 0.11085081100463867,\r\n \"load scripts/lora_script.py\": 0.05980086326599121,\r\n \"load scripts/scunet_model.py\": 0.011086463928222656,\r\n \"load scripts/swinir_model.py\": 0.010489225387573242,\r\n \"load scripts/hotkey_config.py\": 0.0001678466796875,\r\n \"load scripts/extra_options_section.py\": 0.00020551681518554688,\r\n \"load scripts/hypertile_script.py\": 0.019654512405395508,\r\n \"load scripts/hypertile_xyz.py\": 8.058547973632812e-05,\r\n \"load scripts/clip_interrogator_ext.py\": 0.02592325210571289,\r\n \"load scripts/latent_upscale.py\": 0.0007441043853759766,\r\n \"load scripts/adapter.py\": 0.0003275871276855469,\r\n \"load scripts/api.py\": 0.12074923515319824,\r\n \"load scripts/batch_hijack.py\": 0.0005114078521728516,\r\n \"load scripts/cldm.py\": 0.00022983551025390625,\r\n \"load scripts/controlmodel_ipadapter.py\": 0.00032711029052734375,\r\n \"load scripts/controlnet.py\": 0.0494229793548584,\r\n \"load scripts/controlnet_diffusers.py\": 0.0001556873321533203,\r\n \"load scripts/controlnet_lllite.py\": 0.0001430511474609375,\r\n \"load scripts/controlnet_lora.py\": 0.00012731552124023438,\r\n \"load scripts/controlnet_model_guess.py\": 0.00011944770812988281,\r\n \"load scripts/controlnet_version.py\": 0.0001239776611328125,\r\n \"load scripts/enums.py\": 0.0003447532653808594,\r\n \"load scripts/external_code.py\": 6.246566772460938e-05,\r\n \"load scripts/global_state.py\": 0.0003178119659423828,\r\n \"load scripts/hook.py\": 0.0002903938293457031,\r\n \"load scripts/infotext.py\": 9.560585021972656e-05,\r\n \"load scripts/logging.py\": 0.00016260147094726562,\r\n \"load scripts/lvminthin.py\": 0.0001952648162841797,\r\n \"load scripts/movie2movie.py\": 0.00022029876708984375,\r\n \"load scripts/processor.py\": 0.00023818016052246094,\r\n \"load scripts/utils.py\": 0.00011324882507324219,\r\n \"load scripts/xyz_grid_support.py\": 0.0003902912139892578,\r\n \"load scripts/ultimate-upscale.py\": 0.00045228004455566406,\r\n \"load scripts/refiner.py\": 0.00011444091796875,\r\n \"load scripts/seed.py\": 0.00012302398681640625,\r\n \"load scripts\": 0.41962695121765137,\r\n \"load upscalers\": 0.001577138900756836,\r\n \"refresh VAE\": 0.0006160736083984375,\r\n \"refresh textual inversion templates\": 2.86102294921875e-05,\r\n \"scripts list_optimizers\": 0.00027680397033691406,\r\n \"scripts list_unets\": 4.76837158203125e-06,\r\n \"reload hypernetworks\": 0.0027685165405273438,\r\n \"initialize extra networks\": 0.004837512969970703,\r\n \"scripts before_ui_callback\": 0.00041604042053222656,\r\n \"create ui\": 0.4426920413970947,\r\n \"gradio launch\": 0.23865938186645508,\r\n \"add APIs\": 0.003912210464477539,\r\n \"app_started_callback/lora_script.py\": 0.0001537799835205078,\r\n \"app_started_callback/clip_interrogator_ext.py\": 0.0003566741943359375,\r\n \"app_started_callback/api.py\": 0.0010819435119628906,\r\n \"app_started_callback\": 0.001596689224243164\r\n 
}\r\n },\r\n \"Packages\": [\r\n \"absl-py==2.0.0\",\r\n \"accelerate==0.21.0\",\r\n \"addict==2.4.0\",\r\n \"aenum==3.1.15\",\r\n \"aiofiles==23.2.1\",\r\n \"aiohttp==3.9.1\",\r\n \"aiosignal==1.3.1\",\r\n \"altair==5.2.0\",\r\n \"antlr4-python3-runtime==4.9.3\",\r\n \"anyio==3.7.1\",\r\n \"attrs==23.1.0\",\r\n \"basicsr==1.4.2\",\r\n \"beautifulsoup4==4.12.2\",\r\n \"blendmodes==2022\",\r\n \"boltons==23.1.1\",\r\n \"cachetools==5.3.2\",\r\n \"certifi==2022.12.7\",\r\n \"cffi==1.16.0\",\r\n \"charset-normalizer==2.1.1\",\r\n \"clean-fid==0.1.35\",\r\n \"click==8.1.7\",\r\n \"clip-interrogator==0.6.0\",\r\n \"clip==1.0\",\r\n \"contourpy==1.2.0\",\r\n \"cssselect2==0.7.0\",\r\n \"cycler==0.12.1\",\r\n \"deprecation==2.1.0\",\r\n \"einops==0.4.1\",\r\n \"facexlib==0.3.0\",\r\n \"fastapi==0.94.0\",\r\n \"ffmpy==0.3.1\",\r\n \"filelock==3.9.0\",\r\n \"filterpy==1.4.5\",\r\n \"flatbuffers==23.5.26\",\r\n \"fonttools==4.46.0\",\r\n \"frozenlist==1.4.0\",\r\n \"fsspec==2023.12.1\",\r\n \"ftfy==6.1.3\",\r\n \"future==0.18.3\",\r\n \"fvcore==0.1.5.post20221221\",\r\n \"gdown==4.7.1\",\r\n \"gfpgan==1.3.8\",\r\n \"gitdb==4.0.11\",\r\n \"gitpython==3.1.32\",\r\n \"google-auth-oauthlib==1.1.0\",\r\n \"google-auth==2.25.1\",\r\n \"gradio-client==0.5.0\",\r\n \"gradio==3.41.2\",\r\n \"grpcio==1.60.0\",\r\n \"h11==0.12.0\",\r\n \"httpcore==0.15.0\",\r\n \"httpx==0.24.1\",\r\n \"huggingface-hub==0.19.4\",\r\n \"idna==3.4\",\r\n \"imageio==2.33.0\",\r\n \"importlib-metadata==7.0.0\",\r\n \"importlib-resources==6.1.1\",\r\n \"inflection==0.5.1\",\r\n \"iopath==0.1.9\",\r\n \"jinja2==3.1.2\",\r\n \"jsonmerge==1.8.0\",\r\n \"jsonschema-specifications==2023.11.2\",\r\n \"jsonschema==4.20.0\",\r\n \"kiwisolver==1.4.5\",\r\n \"kornia==0.6.7\",\r\n \"lark==1.1.2\",\r\n \"lazy-loader==0.3\",\r\n \"lightning-utilities==0.10.0\",\r\n \"llvmlite==0.41.1\",\r\n \"lmdb==1.4.1\",\r\n \"lpips==0.1.4\",\r\n \"lxml==4.9.3\",\r\n \"markdown==3.5.1\",\r\n \"markupsafe==2.1.3\",\r\n \"matplotlib==3.8.2\",\r\n \"mediapipe==0.10.8\",\r\n \"mpmath==1.2.1\",\r\n \"multidict==6.0.4\",\r\n \"networkx==3.0rc1\",\r\n \"numba==0.58.1\",\r\n \"numpy==1.23.5\",\r\n \"oauthlib==3.2.2\",\r\n \"omegaconf==2.2.3\",\r\n \"open-clip-torch==2.20.0\",\r\n \"opencv-contrib-python==4.8.1.78\",\r\n \"opencv-python==4.8.1.78\",\r\n \"orjson==3.9.10\",\r\n \"packaging==23.2\",\r\n \"pandas==2.1.4\",\r\n \"piexif==1.1.3\",\r\n \"pillow==9.5.0\",\r\n \"pip==23.1.2\",\r\n \"platformdirs==4.1.0\",\r\n \"portalocker==2.8.2\",\r\n \"protobuf==3.20.0\",\r\n \"psutil==5.9.5\",\r\n \"pyasn1-modules==0.3.0\",\r\n \"pyasn1==0.5.1\",\r\n \"pycparser==2.21\",\r\n \"pydantic==1.10.13\",\r\n \"pydub==0.25.1\",\r\n \"pyparsing==3.1.1\",\r\n \"pysocks==1.7.1\",\r\n \"python-dateutil==2.8.2\",\r\n \"python-multipart==0.0.6\",\r\n \"pytorch-lightning==1.9.4\",\r\n \"pytorch-triton-rocm==2.1.0+dafe145982\",\r\n \"pytz==2023.3.post1\",\r\n \"pywavelets==1.5.0\",\r\n \"pyyaml==6.0.1\",\r\n \"realesrgan==0.3.0\",\r\n \"referencing==0.32.0\",\r\n \"regex==2023.10.3\",\r\n \"reportlab==4.0.7\",\r\n \"requests-oauthlib==1.3.1\",\r\n \"requests==2.28.1\",\r\n \"resize-right==0.0.2\",\r\n \"rpds-py==0.13.2\",\r\n \"rsa==4.9\",\r\n \"safetensors==0.3.1\",\r\n \"scikit-image==0.21.0\",\r\n \"scipy==1.11.4\",\r\n \"semantic-version==2.10.0\",\r\n \"sentencepiece==0.1.99\",\r\n \"setuptools==65.5.0\",\r\n \"six==1.16.0\",\r\n \"smmap==5.0.1\",\r\n \"sniffio==1.3.0\",\r\n \"sounddevice==0.4.6\",\r\n \"soupsieve==2.5\",\r\n \"starlette==0.26.1\",\r\n \"svglib==1.5.1\",\r\n 
\"sympy==1.11.1\",\r\n \"tabulate==0.9.0\",\r\n \"tb-nightly==2.16.0a20231208\",\r\n \"tensorboard-data-server==0.7.2\",\r\n \"termcolor==2.4.0\",\r\n \"tf-keras-nightly==2.16.0.dev2023120810\",\r\n \"tifffile==2023.9.26\",\r\n \"timm==0.9.2\",\r\n \"tinycss2==1.2.1\",\r\n \"tokenizers==0.13.3\",\r\n \"tomesd==0.1.3\",\r\n \"tomli==2.0.1\",\r\n \"toolz==0.12.0\",\r\n \"torch==2.2.0.dev20231208+rocm5.6\",\r\n \"torchdiffeq==0.2.3\",\r\n \"torchmetrics==1.2.1\",\r\n \"torchsde==0.2.6\",\r\n \"torchvision==0.17.0.dev20231208+rocm5.6\",\r\n \"tqdm==4.66.1\",\r\n \"trampoline==0.1.2\",\r\n \"transformers==4.30.2\",\r\n \"typing-extensions==4.8.0\",\r\n \"tzdata==2023.3\",\r\n \"urllib3==1.26.13\",\r\n \"uvicorn==0.24.0.post1\",\r\n \"wcwidth==0.2.12\",\r\n \"webencodings==0.5.1\",\r\n \"websockets==11.0.3\",\r\n \"werkzeug==3.0.1\",\r\n \"yacs==0.1.8\",\r\n \"yapf==0.40.2\",\r\n \"yarl==1.9.4\",\r\n \"zipp==3.17.0\"\r\n ]\r\n}\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Console logs\n\n```Shell\n\u276f ./webui.sh (base) \r\n\r\n################################################################\r\nInstall script for stable-diffusion + Web UI\r\nTested on Debian 11 (Bullseye)\r\n################################################################\r\n\r\n################################################################\r\nRunning on ciel user\r\n################################################################\r\n\r\n################################################################\r\nCreate and activate python venv\r\n################################################################\r\n\r\n################################################################\r\nLaunching launch.py...\r\n################################################################\r\nUsing TCMalloc: libtcmalloc_minimal.so.4\r\nPython 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0]\r\nVersion: v1.7.0-RC-5-gf92d6149\r\nCommit hash: f92d61497a426a19818625c3ccdaae9beeb82b31\r\nLaunching Web UI with arguments: \r\nno module 'xformers'. Processing without...\r\nno module 'xformers'. Processing without...\r\nNo module 'xformers'. Proceeding without it.\r\n2023-12-09 17:08:09,876 - ControlNet - INFO - ControlNet v1.1.422\r\nControlNet preprocessor location: /home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads\r\n2023-12-09 17:08:09,921 - ControlNet - INFO - ControlNet v1.1.422\r\nLoading weights [5493a0ec49] from /home/ciel/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/AOM3A1B_orangemixs.safetensors\r\nRunning on local URL: http://127.0.0.1:7860\r\n\r\nTo create a public link, set `share=True` in `launch()`.\r\nCreating model from config: /home/ciel/stable-diffusion/stable-diffusion-webui/configs/v1-inference.yaml\r\nStartup time: 8.9s (prepare environment: 4.0s, import torch: 2.0s, import gradio: 0.5s, setup paths: 0.8s, other imports: 0.5s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.2s).\r\nLoading VAE weights specified in settings: /home/ciel/stable-diffusion/stable-diffusion-webui/models/VAE/orangemix.vae.pt\r\nApplying attention optimization: Doggettx... 
done.\r\nModel loaded in 2.6s (load weights from disk: 0.6s, create model: 0.2s, apply weights to model: 1.4s, load VAE: 0.2s, calculate empty prompt: 0.1s).\r\nTraceback (most recent call last):\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/routes.py\", line 488, in run_predict\r\n output = await app.get_blocks().process_api(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py\", line 1431, in process_api\r\n result = await self.call_function(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py\", line 1103, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n ^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py\", line 707, in wrapper\r\n response = f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/modules/ui_prompt_styles.py\", line 27, in save_style\r\n shared.prompt_styles.save_styles(shared.styles_filename)\r\n File \"/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py\", line 212, in save_styles\r\n style_paths.remove(\"do_not_save\")\r\nKeyError: 'do_not_save'\n```\n\n\n### Additional information\n\nI'm running dev branch due to the Navi3 bug, checking out master after launch seems to result in the same issue, but it could have just been jit-ed, didn't test very in-depth", "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276", "file_loc": "{'base_commit': 'f92d61497a426a19818625c3ccdaae9beeb82b31', 'files': [{'path': 'modules/styles.py', 'status': 'modified', 'Loc': {\"('StyleDatabase', '__init__', 95)\": {'mod': [101, 102, 103, 104]}, \"('StyleDatabase', None, 94)\": {'mod': [158, 159, 160, 161]}, \"('StyleDatabase', 'get_style_paths', 158)\": {'mod': [175, 177]}, \"('StyleDatabase', 'save_styles', 195)\": {'mod': [199, 200, 201, 202, 204, 205, 206, 207, 208, 209, 211, 212]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["modules/styles.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "home-assistant", "repo_name": "core", "base_commit": "c3e9c1a7e8fdc949b8e638d79ab476507ff92f18", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/60067", "iss_label": "integration: environment_canada\nby-code-owner", "title": "Environment Canada (EC) radar integration slowing Environment Canada servers", "body": "### The problem\r\n\r\nThe `config_flow` change to the EC 
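Aside on the stable-diffusion-webui record above: the traceback ends in `style_paths.remove("do_not_save")` raising `KeyError: 'do_not_save'`. A minimal sketch of the defensive pattern that failure implies, assuming `style_paths` is a plain `set` that may not contain the sentinel; the set contents here are made up:

```python
# Illustrative sketch only, not the actual webui patch. It assumes
# style_paths is a set that may or may not contain the "do_not_save"
# sentinel, which is the situation the traceback above shows.
style_paths = {"styles.csv"}  # sentinel absent on this code path

# set.remove() raises KeyError when the element is missing:
# style_paths.remove("do_not_save")  # -> KeyError: 'do_not_save'

# set.discard() removes the element only if present, and is a no-op otherwise:
style_paths.discard("do_not_save")
print(style_paths)  # {'styles.csv'}
```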
integration did not change the way the underlying radar retrieval works, but did enable radar for everyone. As a result the EC servers are getting far too many requests. We (the codeowners) have been working with EC to diagnose this issue and understand their concerns. \r\n\r\nWe are doing two things (PR is in progress). Caching requests to the EC servers. Work so far shows that through caching we can reduce the number of requests by over 90%. This fix is in the integration dependency library.\r\n\r\nSecond, we are creating the radar (camera) entity with `_attr_entity_registry_enabled_default = False` so that new radar entities are disabled by default. Many people use the integration for forecast only.\r\n\r\nLast, EC is putting a policy in place such that User Agent needs to be filled in to represent the calling library.\r\n\r\n### What version of Home Assistant Core has the issue?\r\n\r\n2021.12.0.dev0\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant Core\r\n\r\n### Integration causing the issue\r\n\r\nEnvironment Canada\r\n\r\n### Link to integration documentation on our website\r\n\r\nhttps://www.home-assistant.io/integrations/environment_canada/\r\n\r\n### Example YAML snippet\r\n\r\n_No response_\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n_No response_\r\n\r\n### Additional information\r\n\r\nQuote from one of the email exchanges with EC:\r\n\r\n> What we observed is 1350 unique IP addresses using this code which made 23.5 million requests over 5 days.\r\n\r\nIn order to respond to EC as quickly as possible we are asking for consideration to release the PR, when available, in the next dot release.", "pr_html_url": "https://github.com/home-assistant/core/pull/60087", "file_loc": "{'base_commit': 'c3e9c1a7e8fdc949b8e638d79ab476507ff92f18', 'files': [{'path': 'homeassistant/components/environment_canada/camera.py', 'status': 'modified', 'Loc': {\"('ECCamera', '__init__', 49)\": {'add': [57]}}}, {'path': 'homeassistant/components/environment_canada/manifest.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [603]}}}, {'path': 'requirements_test_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [372]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["homeassistant/components/environment_canada/camera.py", "homeassistant/components/environment_canada/manifest.json"], "doc": [], "test": [], "config": ["requirements_all.txt", "requirements_test_all.txt"], "asset": []}} {"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "939539611f0cad12056f7be78ef6b2128b90b779", "iss_has_pr": 1, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/336", "iss_label": "bug\np2", "title": "Handle Nones in chunk.choices[0].delta", "body": "![WechatIMG434](https://github.com/abi/screenshot-to-code/assets/158557918/d2ddcd3e-f944-40cb-a74e-b54bec8938f4)\r\n\r\nThere is a successful request for the openai interface, but it seems that no code is generated.\r\n\r\nbackend-1 | ERROR: Exception in ASGI application\r\nbackend-1 | Traceback (most recent call last):\r\nbackend-1 | File 
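Aside on the Environment Canada record above: it quotes the planned mitigation directly, creating the radar camera entity with `_attr_entity_registry_enabled_default = False`. A hedged sketch of that pattern following Home Assistant's entity conventions; the integration wiring is omitted and the class name is hypothetical:

```python
# Sketch of the mitigation described in the Environment Canada record:
# register the radar camera but leave it disabled until a user opts in.
# Only the attribute shown is the point; the rest is a hypothetical shell.
from homeassistant.components.camera import Camera


class ECRadarCamera(Camera):
    # Home Assistant reads this when the entity is first added to the
    # entity registry; False creates the entity in a disabled state.
    _attr_entity_registry_enabled_default = False
```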
\"/usr/local/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py\", line 250, in run_asgi\r\nbackend-1 | result = await self.app(self.scope, self.asgi_receive, self.asgi_send)\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\nbackend-1 | return await self.app(scope, receive, send)\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/applications.py\", line 276, in __call__\r\nbackend-1 | await super().__call__(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/applications.py\", line 122, in __call__\r\nbackend-1 | await self.middleware_stack(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py\", line 149, in __call__\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py\", line 75, in __call__\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\nbackend-1 | raise exc\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\nbackend-1 | await self.app(scope, receive, sender)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py\", line 21, in __call__\r\nbackend-1 | raise e\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py\", line 18, in __call__\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/routing.py\", line 718, in __call__\r\nbackend-1 | await route.handle(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/routing.py\", line 341, in handle\r\nbackend-1 | await self.app(scope, receive, send)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/starlette/routing.py\", line 82, in app\r\nbackend-1 | await func(session)\r\nbackend-1 | File \"/usr/local/lib/python3.12/site-packages/fastapi/routing.py\", line 289, in app\r\nbackend-1 | await dependant.call(**values)\r\nbackend-1 | File \"/app/routes/generate_code.py\", line 251, in stream_code\r\nbackend-1 | completion = await stream_openai_response(\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | File \"/app/llm.py\", line 62, in stream_openai_response\r\nbackend-1 | content = chunk.choices[0].delta.content or \"\"\r\nbackend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nbackend-1 | AttributeError: 'NoneType' object has no attribute 'content'\r\nbackend-1 | INFO: connection closed\r\n", "pr_html_url": "https://github.com/abi/screenshot-to-code/pull/341", "file_loc": "{'base_commit': '939539611f0cad12056f7be78ef6b2128b90b779', 'files': [{'path': 'backend/llm.py', 'status': 'modified', 'Loc': {\"(None, 'stream_openai_response', 32)\": {'mod': [62, 63, 64]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49]}}}, {'path': 'frontend/src/App.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [381]}}}, {'path': 'frontend/yarn.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5644, 5939]}}}]}", "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["backend/llm.py", "frontend/src/App.tsx", "frontend/package.json"], "doc": [], "test": [], "config": ["frontend/yarn.lock"], "asset": []}} {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "bf895eb656dee9084273cd36395828bd06aa231d", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/6", "iss_label": "enhancement\ngood first issue\nAPI costs", "title": "Make Auto-GPT aware of it's running cost", "body": "Auto-GPT is expensive to run due to GPT-4's API cost.\n\nWe could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost. \n\nThis could also be displayed to the user to help them be more aware of exactly how much they are spending.", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/762", "file_loc": "{'base_commit': 'bf895eb656dee9084273cd36395828bd06aa231d', 'files': [{'path': 'autogpt/chat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, \"(None, 'chat_with_ai', 54)\": {'add': [135]}}}, {'path': 'autogpt/config/ai_config.py', 'status': 'modified', 'Loc': {\"('AIConfig', None, 21)\": {'add': [28]}, \"('AIConfig', '__init__', 31)\": {'add': [40, 48], 'mod': [32]}, \"('AIConfig', 'load', 53)\": {'add': [75], 'mod': [55, 77]}, \"('AIConfig', 'save', 79)\": {'add': [94]}, \"('AIConfig', 'construct_full_prompt', 99)\": {'add': [149], 'mod': [110]}}}, {'path': 'autogpt/llm_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, \"(None, 'create_chat_completion', 56)\": {'mod': [99, 107]}, \"(None, 'create_embedding_with_ada', 156)\": {'mod': [162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172]}}}, {'path': 'autogpt/memory/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, \"(None, 'get_ada_embedding', 11)\": {'mod': [13, 14, 15, 16, 17, 18, 19, 20, 21]}}}, {'path': 'autogpt/prompts/prompt.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, \"(None, 'construct_main_ai_config', 78)\": {'add': [88, 100, 109]}}}, {'path': 'autogpt/setup.py', 'status': 'modified', 'Loc': {\"(None, 'generate_aiconfig_automatic', 139)\": {'add': [194], 'mod': [196]}, \"(None, 'generate_aiconfig_manual', 70)\": {'mod': [136]}}}, {'path': 'tests/unit/test_commands.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 10]}, \"(None, 'test_make_agent', 11)\": {'mod': [17, 20]}}}, {'path': 'tests/unit/test_setup.py', 'status': 'modified', 'Loc': {\"('TestAutoGPT', 'test_generate_aiconfig_automatic_fallback', 39)\": {'add': [46]}, \"('TestAutoGPT', 'test_prompt_user_manual_mode', 57)\": {'add': [64]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/chat.py", "autogpt/prompts/prompt.py", "autogpt/config/ai_config.py", "autogpt/memory/base.py", "autogpt/setup.py", "autogpt/llm_utils.py"], "doc": [], "test": ["tests/unit/test_commands.py", "tests/unit/test_setup.py"], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "3e01ce744a981d8f19ae77ec695005e7000f4703", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5855", "iss_label": "bug", "title": "Generic 
extractor can crash if Brotli is not available", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\nTesting #5851 in a configuration where no Brotli decoder was available showed the crash in the log.\r\n\r\nThe problem is this extractor code:\r\nhttps://github.com/yt-dlp/yt-dlp/blob/1fc089143c79b02b8373ae1d785d5e3a68635d4d/yt_dlp/extractor/generic.py#L2306-L2318\r\n\r\nNormally there is a check for a supported Brotli encoder (using `SUPPORTED_ENCODINGS`). Specifying `*` in the `Accept-encoding` header bypasses that check.\r\n\r\nHowever, I don't think that `*` does what is wanted according to the comments in the above code. The code wants to get the resource with no decoding (because decoding in yt-dl[p] starts by reading the entire response), but `*` still allows the server to send a compressed response. What is wanted is the `identity` encoding which is the default if no other encoding is specified. 
Or, to re-cast the decoding process so that the whole response stream is not read before decoding, but that means creating stream decoding methods for Brotli and zlib.\r\n\r\nAlso, there could be a check for a supported encoding in `YoutubeDLHandler.http_response()`, perhaps synthesizing 416 or 406 id the server has sent an encoding that isn't supported, instead of the crash seen here.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '-F', 'https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.11.11 [8b644025b] (source)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Git HEAD: c73355510\r\n[debug] Python 3.9.15 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1s 1 Nov 2022, glibc 2.23)\r\n[debug] exe versions: ffmpeg 4.3, ffprobe 4.3\r\n[debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1735 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.11.11, Current version: 2022.11.11\r\nyt-dlp is up to date (2022.11.11)\r\n[generic] Extracting URL: https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867\r\n[generic] cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867: Downloading webpage\r\nERROR: 'NoneType' object has no attribute 'decompress'\r\nTraceback (most recent call last):\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 1495, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 1571, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py\", line 680, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/generic.py\", line 2314, in _real_extract\r\n full_response = self._request_webpage(url, video_id, headers={\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py\", line 807, in _request_webpage\r\n return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 3719, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/utils.py\", line 1452, in http_response\r\n io.BytesIO(self.brotli(resp.read())), old_resp.headers, old_resp.url, old_resp.code)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/utils.py\", line 1389, in brotli\r\n return brotli.decompress(data)\r\nAttributeError: 'NoneType' object has no attribute 'decompress'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/3e01ce744a981d8f19ae77ec695005e7000f4703", "file_loc": "{'base_commit': 
'3e01ce744a981d8f19ae77ec695005e7000f4703', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {\"('GenericIE', None, 42)\": {'add': [2156]}, \"('GenericIE', '_real_extract', 2276)\": {'mod': [2315]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/generic.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "3e01ce744a981d8f19ae77ec695005e7000f4703", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5855", "iss_label": "bug", "title": "Generic extractor can crash if Brotli is not available", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Provide a description that is worded well enough to be understood\n\nTesting #5851 in a configuration where no Brotli decoder was available showed the crash in the log.\r\n\r\nThe problem is this extractor code:\r\nhttps://github.com/yt-dlp/yt-dlp/blob/1fc089143c79b02b8373ae1d785d5e3a68635d4d/yt_dlp/extractor/generic.py#L2306-L2318\r\n\r\nNormally there is a check for a supported Brotli encoder (using `SUPPORTED_ENCODINGS`). Specifying `*` in the `Accept-encoding` header bypasses that check.\r\n\r\nHowever, I don't think that `*` does what is wanted according to the comments in the above code. The code wants to get the resource with no decoding (because decoding in yt-dl[p] starts by reading the entire response), but `*` still allows the server to send a compressed response. What is wanted is the `identity` encoding which is the default if no other encoding is specified. 
Or, to re-cast the decoding process so that the whole response stream is not read before decoding, but that means creating stream decoding methods for Brotli and zlib.\r\n\r\nAlso, there could be a check for a supported encoding in `YoutubeDLHandler.http_response()`, perhaps synthesizing 416 or 406 id the server has sent an encoding that isn't supported, instead of the crash seen here.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '-F', 'https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version 2022.11.11 [8b644025b] (source)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Git HEAD: c73355510\r\n[debug] Python 3.9.15 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1s 1 Nov 2022, glibc 2.23)\r\n[debug] exe versions: ffmpeg 4.3, ffprobe 4.3\r\n[debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1735 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.11.11, Current version: 2022.11.11\r\nyt-dlp is up to date (2022.11.11)\r\n[generic] Extracting URL: https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867\r\n[generic] cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867: Downloading webpage\r\nERROR: 'NoneType' object has no attribute 'decompress'\r\nTraceback (most recent call last):\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 1495, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 1571, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py\", line 680, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/generic.py\", line 2314, in _real_extract\r\n full_response = self._request_webpage(url, video_id, headers={\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py\", line 807, in _request_webpage\r\n return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py\", line 3719, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/utils.py\", line 1452, in http_response\r\n io.BytesIO(self.brotli(resp.read())), old_resp.headers, old_resp.url, old_resp.code)\r\n File \"/home/df/Documents/src/yt-dlp/yt_dlp/utils.py\", line 1389, in brotli\r\n return brotli.decompress(data)\r\nAttributeError: 'NoneType' object has no attribute 'decompress'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/3e01ce744a981d8f19ae77ec695005e7000f4703", "file_loc": "{'base_commit': 
'3e01ce744a981d8f19ae77ec695005e7000f4703', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {\"('GenericIE', None, 42)\": {'add': [2156]}, \"('GenericIE', '_real_extract', 2276)\": {'mod': [2315]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/generic.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "ded7b37234e229d9bde0a9a506f7c65605803731", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543", "iss_label": "", "title": "Lack of pre-compiled results in lost interest", "body": "so I know the first thing people are going to say is, this isn't an issue. However, it is. by not having a precompiled version to download over half the people that find their way to this GitHub are going to lose interest. Honestly, I'm one of them. I attempted to compile it but then I saw that I had to track down each module for this, yeah quickly drove me away from it. all I wanted to do was mess around and see what it can do. even if the results arent mind-blowing the concept interests me. but due to not having a ready to use executable I like many others I'm sure of, have decided it isn't even worth messing with. ", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/546", "file_loc": "{'base_commit': 'ded7b37234e229d9bde0a9a506f7c65605803731', 'files': [{'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["toolbox/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "96b5814de70ad2435b6db5f49b607b136921f701", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/26948", "iss_label": "Documentation", "title": "The copy button on install copies an extensive comman including env activation", "body": "### Describe the issue linked to the documentation\n\nhttps://scikit-learn.org/stable/install.html\r\n\r\nAbove link will lead you to the sklearn downlanding for link . 
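Aside on the yt-dlp record above ("Generic extractor can crash if Brotli is not available"): it argues that `Accept-Encoding: *` still lets the server pick a compressed coding, while `identity` requests no coding at all. A hedged stdlib sketch of that header semantics; yt-dlp's own opener stack is not reproduced here:

```python
# Sketch of the point made in the yt-dlp record above: "identity" asks
# for an uncompressed body, whereas "*" permits any coding, including
# ones the client may be unable to decode (e.g. brotli without the lib).
import urllib.request

req = urllib.request.Request(
    "https://example.com/",  # placeholder URL
    headers={"Accept-Encoding": "identity"},
)
with urllib.request.urlopen(req) as resp:
    # With identity, Content-Encoding should be absent or "identity",
    # so resp.read() needs no decompression step at all.
    print(resp.headers.get("Content-Encoding"))
    body = resp.read()
```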
\r\nwhen you link copy link button it will copy \r\n`python3 -m venv sklearn-venvpython -m venv sklearn-venvpython -m venv sklearn-venvsource sklearn-venv/bin/activatesource sklearn-venv/bin/activatesklearn-venv\\Scripts\\activatepip install -U scikit-learnpip install -U scikit-learnpip install -U scikit-learnpip3 install -U scikit-learnconda create -n sklearn-env -c conda-forge scikit-learnconda activate sklearn-env`\r\n\r\ninstead of `pip3 install -U scikit-learn`\r\n\r\nif this is the issue so please issue i want to create a pull request for it and tell in which file this issue reside\r\nThanks\n\n### Suggest a potential alternative/fix\n\nBy resoving above issue", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/27052", "file_loc": "{'base_commit': '96b5814de70ad2435b6db5f49b607b136921f701', 'files': [{'path': 'doc/install.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107]}}}, {'path': 'doc/themes/scikit-learn-modern/static/css/theme.css', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1216, 1220, 1225, 1233, 1236, 1239, 1243, 1247], 'mod': [1208, 1209]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["doc/themes/scikit-learn-modern/static/css/theme.css"], "doc": ["doc/install.rst"], "test": [], "config": [], "asset": []}} {"organization": "keras-team", "repo_name": "keras", "base_commit": "49b9682b3570211c7d8f619f8538c08fd5d8bdad", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/10036", "iss_label": "", "title": "[API DESIGN REVIEW] sample weight in ImageDataGenerator.flow", "body": "https://docs.google.com/document/d/14anankKROhliJCpInQH-pITatdjO9UzSN6Iz0MwcDHw/edit?usp=sharing\r\n\r\nMakes it easy to use data augmentation when sample weights are available. 
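Aside on the keras record above ("[API DESIGN REVIEW] sample weight in ImageDataGenerator.flow"): a hedged sketch of how the proposed API reads, assuming a Keras/keras-preprocessing version whose `flow` accepts `sample_weight` and yields `(x, y, w)` batches; the shapes are toy values:

```python
# Toy sketch of the sample_weight API proposed in the keras record above.
# Assumes a Keras version where flow() accepts sample_weight and yields
# (batch_x, batch_y, batch_w) triples when weights are provided.
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

x = np.random.rand(8, 32, 32, 3)
y = np.random.randint(0, 2, size=(8,))
w = np.linspace(0.5, 1.5, num=8)  # per-sample weights

datagen = ImageDataGenerator(horizontal_flip=True)
batch_x, batch_y, batch_w = next(datagen.flow(x, y, sample_weight=w, batch_size=4))
print(batch_x.shape, batch_y.shape, batch_w.shape)  # (4, 32, 32, 3) (4,) (4,)
```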
", "pr_html_url": "https://github.com/keras-team/keras/pull/10092", "file_loc": "{'base_commit': '49b9682b3570211c7d8f619f8538c08fd5d8bdad', 'files': [{'path': 'keras/preprocessing/image.py', 'status': 'modified', 'Loc': {\"('ImageDataGenerator', 'flow', 715)\": {'add': [734, 759], 'mod': [754]}, \"('NumpyArrayIterator', None, 1188)\": {'add': [1201]}, \"('NumpyArrayIterator', '__init__', 1216)\": {'add': [1241, 1278], 'mod': [1217, 1218]}, \"('NumpyArrayIterator', '_get_batches_of_transformed_samples', 1289)\": {'add': [1313]}, \"('ImageDataGenerator', None, 443)\": {'mod': [715]}}}, {'path': 'tests/keras/preprocessing/image_test.py', 'status': 'modified', 'Loc': {\"('TestImage', 'test_image_data_generator', 32)\": {'add': [64]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/keras/preprocessing/image_test.py", "keras/preprocessing/image.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "efb53aafdcaae058962c6189ddecb3dc62b02c31", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/6514", "iss_label": "enhancement", "title": "Migrate from setup.py to pyproject.toml", "body": "We should migrate to the modern declarative setuptools metadata approach as discussed in https://setuptools.pypa.io/en/latest/userguide/quickstart.html and https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html, but only after the 2.12 release.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/6547", "file_loc": "{'base_commit': 'efb53aafdcaae058962c6189ddecb3dc62b02c31', 'files': [{'path': '.bandit.yml', 'status': 'removed', 'Loc': {}}, {'path': '.bumpversion.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.coveragerc', 'status': 'removed', 'Loc': {}}, {'path': '.isort.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.pre-commit-config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'MANIFEST.in', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [13]}}}, {'path': 'pylintrc', 'status': 'removed', 'Loc': {}}, {'path': 'pytest.ini', 'status': 'removed', 'Loc': {}}, {'path': 'setup.cfg', 'status': 'removed', 'Loc': {}}, {'path': 'setup.py', 'status': 'removed', 'Loc': {}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {\"('CrawlerProcessSubprocess', 'test_shutdown_forced', 890)\": {'mod': [902]}}}, {'path': 'tests/test_spiderloader/__init__.py', 'status': 'modified', 'Loc': {\"('SpiderLoaderTest', 'test_syntax_error_warning', 146)\": {'mod': [147, 148, 149]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [82]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/test_spiderloader/__init__.py", ".isort.cfg", ".coveragerc", "setup.cfg", "setup.py", ".bumpversion.cfg"], "doc": [], "test": ["tests/test_crawler.py"], "config": ["pytest.ini", ".pre-commit-config.yaml", "tox.ini", "pylintrc", ".bandit.yml", "MANIFEST.in"], "asset": []}} {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6e950dc9cacefd692dbd8987a3acd12a44b506f", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5859", "iss_label": "question\nquestion-migrate", "title": 
"FastAPI==0.89.0 Cannot use `None` as a return type when `status_code` is set to 204 with `from __future__ import annotations`", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nfrom __future__ import annotations \r\n\r\nfrom fastapi import FastAPI\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get(\"/\", status_code=204)\r\ndef read_root() -> None:\r\n return {\"Hello\": \"World\"}\n```\n\n\n### Description\n\nIf we add:\r\n`from __future__ import annotations`\r\n\r\nIt changes the annotations structure and the response model is `NoneType` instead of `None`, which causes validation of the `statuc_code` vs `response_model` and raises an exception.\r\n\r\n```python\r\n ...\r\n File \".../site-packages/fastapi/routing.py\", line 635, in decorator\r\n self.add_api_route(\r\n File \".../site-packages/fastapi/routing.py\", line 574, in add_api_route\r\n route = route_class(\r\n File \".../site-packages/fastapi/routing.py\", line 398, in __init__\r\n assert is_body_allowed_for_status_code(\r\nAssertionError: Status code 204 must not have a response body\r\n```\r\n\r\nI am working on a fix for it right now.\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.89.0\n\n### Python Version\n\n3.10\n\n### Additional Context\n\n_No response_", "pr_html_url": "https://github.com/fastapi/fastapi/pull/2246", "file_loc": "{'base_commit': 'c6e950dc9cacefd692dbd8987a3acd12a44b506f', 'files': [{'path': '.github/workflows/preview-docs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [".github/workflows/preview-docs.yml"], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "3938f81c1b4a5ee81d5bfc6563c17a225f7e5068", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/1330", "iss_label": "", "title": "Error after installing manim", "body": "I installed all manim & dependecies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:\r\n`Traceback (most recent call last):\r\n File \"c:\\users\\jm\\anaconda3\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"c:\\users\\jm\\anaconda3\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manim.py\", line 5, in \r\n manimlib.main()\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\__init__.py\", line 
9, in main\r\n scenes = manimlib.extract_scene.main(config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\extract_scene.py\", line 113, in main\r\n scenes = get_scenes_to_render(all_scene_classes, scene_config, config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\extract_scene.py\", line 74, in get_scenes_to_render\r\n scene = scene_class(**scene_config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\scene\\scene.py\", line 44, in __init__\r\n self.window = Window(self, **self.window_config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\window.py\", line 19, in __init__\r\n super().__init__(**kwargs)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\moderngl_window\\context\\pyglet\\window.py\", line 51, in __init__\r\n self._window = PygletWrapper(\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\window\\win32\\__init__.py\", line 134, in __init__\r\n super(Win32Window, self).__init__(*args, **kwargs)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\window\\__init__.py\", line 603, in __init__\r\n config = screen.get_best_config(config)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\canvas\\base.py\", line 194, in get_best_config\r\n raise window.NoSuchConfigException()\r\npyglet.window.NoSuchConfigException`.\r\nAny advice? And thank you", "code": null, "pr_html_url": "https://github.com/3b1b/manim/pull/1343", "commit_html_url": null, "file_loc": "{'base_commit': '3938f81c1b4a5ee81d5bfc6563c17a225f7e5068', 'files': [{'path': 'manimlib/window.py', 'status': 'modified', 'Loc': {\"('Window', None, 10)\": {'mod': [15]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/window.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "3938f81c1b4a5ee81d5bfc6563c17a225f7e5068", "iss_html_url": "https://github.com/3b1b/manim/issues/1330", "iss_label": "", "title": "Error after installing manim", "body": "I installed all manim & dependecies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:\r\n`Traceback (most recent call last):\r\n File \"c:\\users\\jm\\anaconda3\\lib\\runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"c:\\users\\jm\\anaconda3\\lib\\runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manim.py\", line 5, in \r\n manimlib.main()\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\__init__.py\", line 9, in main\r\n scenes = manimlib.extract_scene.main(config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\extract_scene.py\", line 113, in main\r\n scenes = get_scenes_to_render(all_scene_classes, scene_config, config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\extract_scene.py\", line 74, in get_scenes_to_render\r\n scene = scene_class(**scene_config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\scene\\scene.py\", line 44, in __init__\r\n self.window = Window(self, **self.window_config)\r\n File \"C:\\Users\\jm\\Documents\\work\\manim_new\\manim\\manimlib\\window.py\", line 19, in __init__\r\n super().__init__(**kwargs)\r\n File 
\"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\moderngl_window\\context\\pyglet\\window.py\", line 51, in __init__\r\n self._window = PygletWrapper(\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\window\\win32\\__init__.py\", line 134, in __init__\r\n super(Win32Window, self).__init__(*args, **kwargs)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\window\\__init__.py\", line 603, in __init__\r\n config = screen.get_best_config(config)\r\n File \"C:\\Users\\jm\\Envs\\manim.new\\lib\\site-packages\\pyglet\\canvas\\base.py\", line 194, in get_best_config\r\n raise window.NoSuchConfigException()\r\npyglet.window.NoSuchConfigException`.\r\nAny advice? And thank you", "code": null, "pr_html_url": "https://github.com/3b1b/manim/pull/1343", "commit_html_url": null, "file_loc": "{'base_commit': '3938f81c1b4a5ee81d5bfc6563c17a225f7e5068', 'files': [{'path': 'manimlib/window.py', 'status': 'modified', 'Loc': {\"('Window', None, 10)\": {'mod': [15]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/window.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "keras-team", "repo_name": "keras", "base_commit": "84b283e6200bcb051ed976782fbb2b123bf9b8fc", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/19793", "iss_label": "type:bug/performance", "title": "model.keras format much slower to load", "body": "Anyone experiencing unreasonably slow load times when loading a keras-format saved model? I have noticed this repeated when working in ipython, where simply instantiating a model via `Model.from_config` then calling `model.load_weights` is much (several factors) faster than loading a `model.keras` file.\r\n\r\nMy understanding is the keras format is simply a zip file with the config.json file and weights h5 (iirc) but weirdly enough, there's something not right going on while loading.", "pr_html_url": "https://github.com/keras-team/keras/pull/19852", "file_loc": "{'base_commit': '84b283e6200bcb051ed976782fbb2b123bf9b8fc', 'files': [{'path': 'keras/src/saving/saving_lib.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 34]}, \"(None, '_save_model_to_fileobj', 95)\": {'mod': [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 130, 131, 132, 133, 134, 135]}, \"(None, '_load_model_from_fileobj', 157)\": {'mod': [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204]}, \"(None, 'load_weights_only', 239)\": {'mod': [253, 254, 255]}}}, {'path': 'keras/src/saving/saving_lib_test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [614]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/src/saving/saving_lib_test.py", "keras/src/saving/saving_lib.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "4cdb266dac852859f695b0555cbe49e58343e69a", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/3539", "iss_label": "bug", "title": "Bug in Conditional Include", "body": "Hi,\n\nI know that when using conditionals on an include, 'All the tasks get evaluated, but the 
conditional is applied to each and every task'. However this breaks when some of that tasks register variables and other tasks in the group use those variable.\n\nExample:\n\nmain.yml:\n\n```\n- include: extra.yml\n when: do_extra is defined\n```\n\nextra.yml:\n\n```\n- name: check if we can do task A\n shell: check_if_task_A_possible\n register: A_possible\n ignore_errors: yes\n\n- name: task A\n shell: run_task_A\n when: A_possible.rc == 0\n```\n\nNow if you run main.yml and 'do_extra' is not defined, the run will fail on 'task A' because when the 'when' condition is evaluated, the variable A_possible will not exist.\n\nIt is not sufficient to just add the top-level include conditional above the other because right now it looks like the two conditions are compounded and tested together which will still fail because A_possible is not defined. I think you would have to run the file level conditional before the task level ones to keep this from happening.\n", "pr_html_url": "https://github.com/ansible/ansible/pull/20158", "file_loc": "{'base_commit': '4cdb266dac852859f695b0555cbe49e58343e69a', 'files': [{'path': 'lib/ansible/modules/windows/win_robocopy.ps1', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25, 26, 27, 28, 73, 76, 93, 94, 95, 114, 115, 167, 168]}}}, {'path': 'lib/ansible/modules/windows/win_robocopy.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [132]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/modules/windows/win_robocopy.ps1", "lib/ansible/modules/windows/win_robocopy.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "f5dacf84468ab7e0631cc61a3f1431a32e3e143c", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/2654", "iss_label": "Feature Request\nContributor Friendly", "title": "utils.get_netrc_auth silently fails when netrc exists but fails to parse", "body": "My .netrc contains a line for the github auth, [like this](https://gist.github.com/wikimatze/9790374).\n\nIt turns out that `netrc.netrc()` doesn't like that:\n\n```\n>>> from netrc import netrc\n>>> netrc()\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py\", line 35, in __init__\n self._parse(file, fp, default_netrc)\n File \"/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py\", line 117, in _parse\n file, lexer.lineno)\nnetrc.NetrcParseError: bad follower token 'protocol' (/Users/david/.netrc, line 9)\n```\n\n`get_netrc_auth` catches the `NetrcParseError` [but just ignores it](https://github.com/kennethreitz/requests/blob/master/requests/utils.py#L106).\n\nAt least having it emit a warning would have saved some hair-pulling.\n", "pr_html_url": "https://github.com/psf/requests/pull/2656", "file_loc": "{'base_commit': 'f5dacf84468ab7e0631cc61a3f1431a32e3e143c', 'files': [{'path': 'requests/utils.py', 'status': 'modified', 'Loc': {\"(None, 'get_netrc_auth', 70)\": {'mod': [70, 108, 109]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/utils.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": 
"oobabooga", "repo_name": "text-generation-webui", "base_commit": "0877741b0350d200be7f1e6cca2780a25ee29cd0", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5851", "iss_label": "bug", "title": "Inference failing using ExLlamav2 version 0.0.18", "body": "### Describe the bug\r\n\r\nSince ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below. Reverting to version 0.0.17 resolves the issue.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n1. Install latest main branch (current commit is `26d822f64f2a029306b250b69dc58468662a4fc6`)\r\n2. Download `GPTQ` model\r\n3. Use `ExLlamav2_HF` model loader\r\n4. Go to `Chat` tab and ask the AI a question.\r\n5. Observe error, even though the model loaded successfully.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\n21:35:11-061459 INFO Loading \"TheBloke_dolphin-2.6-mistral-7B-GPTQ\"\r\n21:35:13-842112 INFO LOADER: \"ExLlamav2\"\r\n21:35:13-843422 INFO TRUNCATION LENGTH: 32768\r\n21:35:13-844234 INFO INSTRUCTION TEMPLATE: \"Alpaca\"\r\n21:35:13-845014 INFO Loaded the model in 2.78 seconds.\r\nTraceback (most recent call last):\r\n File \"/workspace/text-generation-webui/modules/text_generation.py\", line 429, in generate_reply_custom\r\n for reply in shared.model.generate_with_streaming(question, state):\r\n File \"/workspace/text-generation-webui/modules/exllamav2.py\", line 140, in generate_with_streaming\r\n self.generator.begin_stream(ids, settings, loras=self.loras)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 198, in begin_stream\r\n self.begin_stream_ex(input_ids,\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 296, in begin_stream_ex\r\n self._gen_begin_reuse(input_ids, gen_settings)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 624, in _gen_begin_reuse\r\n self._gen_begin(in_tokens, gen_settings)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 586, in _gen_begin\r\n self.model.forward(self.sequence_ids[:, :-1],\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py\", line 694, in forward\r\n r, ls = self._forward(input_ids = input_ids[:, chunk_begin : chunk_end],\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py\", line 776, in _forward\r\n x = module.forward(x, cache = cache, attn_params = attn_params, past_len = past_len, loras = loras, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/attn.py\", line 596, in forward\r\n attn_output = flash_attn_func(q_states, k_states, v_states, causal = True)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 825, in flash_attn_func\r\n return FlashAttnFunc.apply(\r\n File 
\"/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py\", line 553, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 507, in forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 51, in _flash_attn_forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(\r\nRuntimeError: CUDA error: an illegal memory access was encountered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n* Ubuntu 22.04 LTS\r\n* Nvidia A5000 GPU on Runpod\r\n* CUDA 12.1\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/0877741b0350d200be7f1e6cca2780a25ee29cd0", "file_loc": "{'base_commit': '0877741b0350d200be7f1e6cca2780a25ee29cd0', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}, {'path': 'requirements_amd.txt', 'status': 'modified', 'Loc': {'(None, None, 45)': {'mod': [45, 46, 47]}}}, {'path': 'requirements_amd_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43, 44, 45]}}}, {'path': 'requirements_apple_intel.txt', 'status': 'modified', 'Loc': {'(None, None, 41)': {'mod': [41]}}}, {'path': 'requirements_apple_silicon.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43]}}}, {'path': 'requirements_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements_apple_silicon.txt", "requirements_amd_noavx2.txt", "requirements_apple_intel.txt", "requirements_amd.txt", "requirements.txt", "requirements_noavx2.txt"], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "0877741b0350d200be7f1e6cca2780a25ee29cd0", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5851", "iss_label": "bug", "title": "Inference failing using ExLlamav2 version 0.0.18", "body": "### Describe the bug\r\n\r\nSince ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below. Reverting to version 0.0.17 resolves the issue.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n1. Install latest main branch (current commit is `26d822f64f2a029306b250b69dc58468662a4fc6`)\r\n2. Download `GPTQ` model\r\n3. Use `ExLlamav2_HF` model loader\r\n4. Go to `Chat` tab and ask the AI a question.\r\n5. 
Observe error, even though the model loaded successfully.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\n21:35:11-061459 INFO Loading \"TheBloke_dolphin-2.6-mistral-7B-GPTQ\"\r\n21:35:13-842112 INFO LOADER: \"ExLlamav2\"\r\n21:35:13-843422 INFO TRUNCATION LENGTH: 32768\r\n21:35:13-844234 INFO INSTRUCTION TEMPLATE: \"Alpaca\"\r\n21:35:13-845014 INFO Loaded the model in 2.78 seconds.\r\nTraceback (most recent call last):\r\n File \"/workspace/text-generation-webui/modules/text_generation.py\", line 429, in generate_reply_custom\r\n for reply in shared.model.generate_with_streaming(question, state):\r\n File \"/workspace/text-generation-webui/modules/exllamav2.py\", line 140, in generate_with_streaming\r\n self.generator.begin_stream(ids, settings, loras=self.loras)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 198, in begin_stream\r\n self.begin_stream_ex(input_ids,\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 296, in begin_stream_ex\r\n self._gen_begin_reuse(input_ids, gen_settings)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 624, in _gen_begin_reuse\r\n self._gen_begin(in_tokens, gen_settings)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py\", line 586, in _gen_begin\r\n self.model.forward(self.sequence_ids[:, :-1],\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py\", line 694, in forward\r\n r, ls = self._forward(input_ids = input_ids[:, chunk_begin : chunk_end],\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py\", line 776, in _forward\r\n x = module.forward(x, cache = cache, attn_params = attn_params, past_len = past_len, loras = loras, **kwargs)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/attn.py\", line 596, in forward\r\n attn_output = flash_attn_func(q_states, k_states, v_states, causal = True)\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 825, in flash_attn_func\r\n return FlashAttnFunc.apply(\r\n File \"/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py\", line 553, in apply\r\n return super().apply(*args, **kwargs) # type: ignore[misc]\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 507, in forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(\r\n File \"/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py\", line 51, in _flash_attn_forward\r\n out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(\r\nRuntimeError: CUDA error: an illegal memory access was encountered\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with 
`TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n* Ubuntu 22.04 LTS\r\n* Nvidia A5000 GPU on Runpod\r\n* CUDA 12.1\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/0877741b0350d200be7f1e6cca2780a25ee29cd0", "file_loc": "{'base_commit': '0877741b0350d200be7f1e6cca2780a25ee29cd0', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}, {'path': 'requirements_amd.txt', 'status': 'modified', 'Loc': {'(None, None, 45)': {'mod': [45, 46, 47]}}}, {'path': 'requirements_amd_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43, 44, 45]}}}, {'path': 'requirements_apple_intel.txt', 'status': 'modified', 'Loc': {'(None, None, 41)': {'mod': [41]}}}, {'path': 'requirements_apple_silicon.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43]}}}, {'path': 'requirements_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements_apple_silicon.txt", "requirements_amd_noavx2.txt", "requirements_apple_intel.txt", "requirements_amd.txt", "requirements.txt", "requirements_noavx2.txt"], "asset": []}} {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "89477ea9d3a83181b0222b732a81c71db9edf142", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/2013", "iss_label": "bug", "title": "[BUG] Another permissions error when installing with docker-compose", "body": "### Pre-check\n\n- [X] I have searched the existing issues and none cover this bug.\n\n### Description\n\nThis looks similar, but not the same as #1876\r\n\r\nAs for following the instructions, I've not seen any relevant guide to installing with Docker, hence working a bit blind. \r\n\r\nBackground: I'm trying to run this on an Asustor NAS, which offers very little ability to customize the environment. Ideally, I'd just like to be able to run this by pasting a docker-compose file into Portainer, and having it work it's magic from there:\r\n\r\n---\r\n\r\n```\r\nsal@halob:/volume1/home/sal/apps/private-gpt $ docker-compose up\r\n[+] Running 3/3\r\n \u2714 Network private-gpt_default Created 0.1s\r\n \u2714 Container private-gpt-ollama-1 Created 0.1s\r\n \u2714 Container private-gpt-private-gpt-1 Created 0.1s\r\nAttaching to ollama-1, private-gpt-1\r\nollama-1 | Couldn't find '/root/.ollama/id_ed25519'. 
Generating new private key.\r\nollama-1 | Your new public key is:\r\nollama-1 |\r\nollama-1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQkShAIoUDyyueUTiCHM9/AZfZ+rxnUZgmh+YByBVB\r\nollama-1 |\r\nollama-1 | 2024/07/23 23:20:28 routes.go:1096: INFO server config env=\"map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]\"\r\nollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:778 msg=\"total blobs: 0\"\r\nollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:785 msg=\"total unused blobs removed: 0\"\r\nollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=routes.go:1143 msg=\"Listening on [::]:11434 (version 0.2.6)\"\r\nollama-1 | time=2024-07-23T23:20:28.318Z level=INFO source=payload.go:30 msg=\"extracting embedded files\" dir=/tmp/ollama1112441504/runners\r\nprivate-gpt-1 | 23:20:29.406 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']\r\nollama-1 | time=2024-07-23T23:20:33.589Z level=INFO source=payload.go:44 msg=\"Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60102 cpu]\"\r\nollama-1 | time=2024-07-23T23:20:33.589Z level=INFO source=gpu.go:205 msg=\"looking for compatible GPUs\"\r\nollama-1 | time=2024-07-23T23:20:33.589Z level=WARN source=gpu.go:225 msg=\"CPU does not have minimum vector extensions, GPU inference disabled\" required=avx detected=\"no vector extensions\"\r\nollama-1 | time=2024-07-23T23:20:33.590Z level=INFO source=types.go:105 msg=\"inference compute\" id=0 library=cpu compute=\"\" driver=0.0 name=\"\" total=\"31.1 GiB\" available=\"28.1 GiB\"\r\nprivate-gpt-1 | There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.\r\nprivate-gpt-1 | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. 
Models won't be available and only tokenizers, configuration and file/data utilities can be used.\r\nprivate-gpt-1 | 23:20:40.419 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 798, in get\r\nprivate-gpt-1 | return self._context[key]\r\nprivate-gpt-1 | ~~~~~~~~~~~~~^^^^^\r\nprivate-gpt-1 | KeyError: \r\nprivate-gpt-1 |\r\nprivate-gpt-1 | During handling of the above exception, another exception occurred:\r\nprivate-gpt-1 |\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 798, in get\r\nprivate-gpt-1 | return self._context[key]\r\nprivate-gpt-1 | ~~~~~~~~~~~~~^^^^^\r\nprivate-gpt-1 | KeyError: \r\nprivate-gpt-1 |\r\nprivate-gpt-1 | During handling of the above exception, another exception occurred:\r\nprivate-gpt-1 |\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 798, in get\r\nprivate-gpt-1 | return self._context[key]\r\nprivate-gpt-1 | ~~~~~~~~~~~~~^^^^^\r\nprivate-gpt-1 | KeyError: \r\nprivate-gpt-1 |\r\nprivate-gpt-1 | During handling of the above exception, another exception occurred:\r\nprivate-gpt-1 |\r\nprivate-gpt-1 | Traceback (most recent call last):\r\nprivate-gpt-1 | File \"\", line 198, in _run_module_as_main\r\nprivate-gpt-1 | File \"\", line 88, in _run_code\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/__main__.py\", line 5, in \r\nprivate-gpt-1 | from private_gpt.main import app\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/main.py\", line 6, in \r\nprivate-gpt-1 | app = create_app(global_injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/launcher.py\", line 63, in create_app\r\nprivate-gpt-1 | ui = root_injector.get(PrivateGptUi)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 974, in get\r\nprivate-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 800, in get\r\nprivate-gpt-1 | instance = self._get_instance(key, provider, self.injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 811, in _get_instance\r\nprivate-gpt-1 | return provider.get(injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 264, in get\r\nprivate-gpt-1 | return injector.create_object(self._cls)\r\nprivate-gpt-1 | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 998, in create_object\r\nprivate-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1031, in call_with_injection\r\nprivate-gpt-1 | dependencies = self.args_to_inject(\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1079, in args_to_inject\r\nprivate-gpt-1 | instance: Any = self.get(interface)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 974, in get\r\nprivate-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 800, in get\r\nprivate-gpt-1 | instance = self._get_instance(key, provider, self.injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 811, in _get_instance\r\nprivate-gpt-1 | return provider.get(injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 264, in get\r\nprivate-gpt-1 | return injector.create_object(self._cls)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 998, in create_object\r\nprivate-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1031, in call_with_injection\r\nprivate-gpt-1 | dependencies = self.args_to_inject(\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1079, in args_to_inject\r\nprivate-gpt-1 | instance: Any = self.get(interface)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File 
\"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 974, in get\r\nprivate-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 91, in wrapper\r\nprivate-gpt-1 | return function(*args, **kwargs)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 800, in get\r\nprivate-gpt-1 | instance = self._get_instance(key, provider, self.injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 811, in _get_instance\r\nprivate-gpt-1 | return provider.get(injector)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 264, in get\r\nprivate-gpt-1 | return injector.create_object(self._cls)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 998, in create_object\r\nprivate-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py\", line 1040, in call_with_injection\r\nprivate-gpt-1 | return callable(*full_args, **dependencies)\r\nprivate-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/private_gpt/components/vector_store/vector_store_component.py\", line 114, in __init__\r\nprivate-gpt-1 | client = QdrantClient(\r\nprivate-gpt-1 | ^^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/qdrant_client.py\", line 117, in __init__\r\nprivate-gpt-1 | self._client = QdrantLocal(\r\nprivate-gpt-1 | ^^^^^^^^^^^^\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py\", line 66, in __init__\r\nprivate-gpt-1 | self._load()\r\nprivate-gpt-1 | File \"/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py\", line 97, in _load\r\nprivate-gpt-1 | os.makedirs(self.location, exist_ok=True)\r\nprivate-gpt-1 | File \"\", line 215, in makedirs\r\nprivate-gpt-1 | File \"\", line 225, in makedirs\r\nprivate-gpt-1 | PermissionError: [Errno 13] Permission denied: 'local_data/private_gpt'\r\n^CGracefully stopping... (press Ctrl+C again to force)\r\n[+] Stopping 2/2\r\n \u2714 Container private-gpt-private-gpt-1 Stopped 0.3s\r\n \u2714 Container private-gpt-ollama-1 Stopped \r\n```\n\n### Steps to Reproduce\n\n1. Clone the repo\r\n2. docker-compose build\r\n3. 
docker-compose up\n\n### Expected Behavior\n\nIt should just run\n\n### Actual Behavior\n\nError, as reported above\n\n### Environment\n\nRunning on an Asustor router, docker 25.0.5\n\n### Additional Information\n\n_No response_\n\n### Version\n\nlatest\n\n### Setup Checklist\n\n- [X] Confirm that you have followed the installation instructions in the project\u2019s documentation.\n- [X] Check that you are using the latest version of the project.\n- [X] Verify disk space availability for model storage and data processing.\n- [X] Ensure that you have the necessary permissions to run the project.\n\n### NVIDIA GPU Setup Checklist\n\n- [ ] Check that the all CUDA dependencies are installed and are compatible with your GPU (refer to [CUDA's documentation](https://docs.nvidia.com/deploy/cuda-compatibility/#frequently-asked-questions))\n- [ ] Ensure an NVIDIA GPU is installed and recognized by the system (run `nvidia-smi` to verify).\n- [ ] Ensure proper permissions are set for accessing GPU resources.\n- [ ] Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run `sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`)", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/2059", "file_loc": "{'base_commit': '89477ea9d3a83181b0222b732a81c71db9edf142', 'files': [{'path': 'Dockerfile.llamacpp-cpu', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 23, 30]}}}, {'path': 'Dockerfile.ollama', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 13, 20]}}}, {'path': 'docker-compose.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 29, 34], 'mod': [15, 47, 60]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docker-compose.yaml"], "test": [], "config": ["Dockerfile.ollama", "Dockerfile.llamacpp-cpu"], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "e04b8e70e60df88751af5cd667cafb66dc32b397", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/26590", "iss_label": "Bug", "title": "KNNImputer add_indicator fails to persist where missing data had been present in training", "body": "### Describe the bug\r\n\r\nHello, I've encountered an issue where the KNNImputer fails to record the fields where there were missing data at the time when `.fit` is called, but not recognised if `.transform` is called on a dense matrix. I would have expected it to return a 2x3 matrix rather than 2x2, with `missingindicator_A = False` for all cases.\r\n\r\nReproduction steps below. 
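The contract at issue in this KNNImputer report is that `add_indicator` fixes the set of indicator columns at `fit` time, so `transform` should keep emitting them even for a batch with no missing values. A minimal sketch of those expected semantics, using a hypothetical `TinyMissingIndicator` (illustrative only, not scikit-learn's `MissingIndicator` implementation):

```python
import numpy as np

class TinyMissingIndicator:
    """Toy model of the expected contract: indicator columns are
    chosen when fit sees the data and stay stable at transform time."""

    def fit(self, X):
        X = np.asarray(X, dtype=float)
        # Record which features contained NaN during fit.
        self.features_ = np.flatnonzero(np.isnan(X).any(axis=0))
        return self

    def transform(self, X):
        X = np.asarray(X, dtype=float)
        # Emit one column per fit-time feature, even when this batch
        # is dense; a dense batch just yields an all-False column.
        return np.isnan(X)[:, self.features_]

indicator = TinyMissingIndicator().fit(np.array([[0.0, 1.0], [np.nan, 2.0]]))
print(indicator.transform(np.array([[0.0, 1.0], [0.0, 2.0]])))
# [[False]
#  [False]]  -> the missingindicator_A column persists instead of disappearing
```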
Any help much appreciated :)\r\n\r\n### Steps/Code to Reproduce\r\n\r\n```python\r\n>>> import pandas as pd\r\n>>> from sklearn.impute import KNNImputer\r\n>>> knn = KNNImputer(add_indicator=True)\r\n>>> df = pd.DataFrame({'A': [0, None], 'B': [1, 2]})\r\n>>> df\r\n A B\r\n0 0.0 1\r\n1 NaN 2\r\n>>> knn.fit(df)\r\nKNNImputer(add_indicator=True)\r\n>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())\r\n A B missingindicator_A\r\n0 0.0 1.0 0.0\r\n1 0.0 2.0 1.0\r\n>>> df['A'] = 0\r\n>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n A B missingindicator_A\r\n0 0.0 1.0 0.0\r\n1 0.0 2.0 0.0\r\n```\r\n\r\n### Actual Results\r\n\r\n```pytb\r\n---------------------------------------------------------------------------\r\nValueError Traceback (most recent call last)\r\nCell In[30], line 1\r\n----> 1 pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pandas/core/frame.py:694, in DataFrame.__init__(self, data, index, columns, dtype, copy)\r\n 684 mgr = dict_to_mgr(\r\n 685 # error: Item \"ndarray\" of \"Union[ndarray, Series, Index]\" has no\r\n 686 # attribute \"name\"\r\n (...)\r\n 691 typ=manager,\r\n 692 )\r\n 693 else:\r\n--> 694 mgr = ndarray_to_mgr(\r\n 695 data,\r\n 696 index,\r\n 697 columns,\r\n 698 dtype=dtype,\r\n 699 copy=copy,\r\n 700 typ=manager,\r\n 701 )\r\n 703 # For data is list-like, or Iterable (will consume into list)\r\n 704 elif is_list_like(data):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:351, in ndarray_to_mgr(values, index, columns, dtype, copy, typ)\r\n 346 # _prep_ndarray ensures that values.ndim == 2 at this point\r\n 347 index, columns = _get_axes(\r\n 348 values.shape[0], values.shape[1], index=index, columns=columns\r\n 349 )\r\n--> 351 _check_values_indices_shape_match(values, index, columns)\r\n 353 if typ == \"array\":\r\n 355 if issubclass(values.dtype.type, str):\r\n\r\nFile /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:422, in _check_values_indices_shape_match(values, index, columns)\r\n 420 passed = values.shape\r\n 421 implied = (len(index), len(columns))\r\n--> 422 raise ValueError(f\"Shape of passed values is {passed}, indices imply {implied}\")\r\n\r\nValueError: Shape of passed values is (2, 2), indices imply (2, 3)\r\n```\r\n\r\n### Versions\r\n\r\n```shell\r\npython3, sklearn = 1.2.1\r\n```\r\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/26600", "file_loc": "{'base_commit': 'e04b8e70e60df88751af5cd667cafb66dc32b397', 'files': [{'path': 'doc/whats_new/v1.3.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'sklearn/impute/_knn.py', 'status': 'modified', 'Loc': {\"('KNNImputer', 'transform', 242)\": {'mod': [285]}}}, {'path': 'sklearn/impute/tests/test_common.py', 'status': 'modified', 'Loc': {\"(None, 'test_keep_empty_features', 171)\": {'add': [183]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/impute/_knn.py"], "doc": ["doc/whats_new/v1.3.rst"], "test": ["sklearn/impute/tests/test_common.py"], "config": [], "asset": []}} {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "9660ec7813a0e77ec3411682b0084d07b540084e", "iss_has_pr": 1, "iss_html_url": 
"https://github.com/nvbn/thefuck/issues/543", "iss_label": "", "title": "Adding sudo works for `aura -Sy` but not `aura -Ay`", "body": "`fuck` is unable to add `sudo` to an `aura -Ay` command:\n\n```\n$ aura -Ay foobar-beta-git # from AUR\naura >>= You have to use `sudo` for that.\n$ fuck\nNo fucks given\n```\n\nBut works as expected for `aura -Sy`:\n\n```\n$ aura -Sy foobar # pacman alias\nerror: you cannot perform this operation unless you are root.\naura >>= Please check your input.\n$ fuck\nsudo aura -Sy foobar [enter/\u2191/\u2193/ctrl+c]\n```\n\nIt's slightly annoying anyway that the `aura` outut is different in these cases, but is it possible for `thefuck` to work-around? Or is the only way for `aura` to give a stderr message containing \"root\"?\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/557", "file_loc": "{'base_commit': '9660ec7813a0e77ec3411682b0084d07b540084e', 'files': [{'path': 'thefuck/rules/sudo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/rules/sudo.py"], "doc": [], "test": [], "config": [], "asset": []}} @@ -58,12 +58,12 @@ {"organization": "Textualize", "repo_name": "rich", "base_commit": "ef1b9b91ccff680b7f931d75fd92c3caa6fcd622", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2083", "iss_label": "Needs triage", "title": "[BUG] typing: Progress in Group isn't happy", "body": "**Describe the bug**\r\n\r\nRunning mypy on the following code:\r\n\r\n```python\r\nfrom rich.console import Group\r\nfrom rich.progress import Progress\r\n\r\nouter_progress = Progress()\r\ninner_progress = Progress()\r\nlive_group = Group(outer_progress, inner_progress)\r\n```\r\n\r\nProduces:\r\n\r\n\r\n```console\r\n$ mypy --strict tmp.py\r\ntmp.py:6: error: Argument 1 to \"Group\" has incompatible type \"Progress\"; expected \"Union[ConsoleRenderable, RichCast, str]\"\r\ntmp.py:6: note: Following member(s) of \"Progress\" have conflicts:\r\ntmp.py:6: note: Expected:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, str]\r\ntmp.py:6: note: Got:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]\r\ntmp.py:6: error: Argument 2 to \"Group\" has incompatible type \"Progress\"; expected \"Union[ConsoleRenderable, RichCast, str]\"\r\ntmp.py:6: note: Expected:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, str]\r\ntmp.py:6: note: Got:\r\ntmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]\r\nFound 2 errors in 1 file (checked 1 source file)\r\n```\r\n\r\nI think `RichCast` should also be in the Protocol, that is, `__rich__` is allowed to return an object with `__rich__`, ~~or it should not be in `__rich__`, that is, `__rich__(self) -> Union[ConsoleRenderable, str]` should be used for all `__rich__` methods. Which is correct depends on runtime; can a `__rich__` return a `__rich__` which can return a `__rich__`, etc?~~. Ahah, I see `CHANGELOG.md:167:- Allowed `__rich__` to work recursively`, so it's the former.\r\n\r\nI'm preparing a PR.\r\n\r\n**Platform**\r\n
\r\nClick to expand\r\n\r\nWhat platform (Win/Linux/Mac) are you running on? What terminal software are you using?\r\n\r\nI may ask you to copy and paste the output of the following commands. It may save some time if you do it now.\r\n\r\nIf you're using Rich in a terminal:\r\n\r\n```\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 \u2502 \u2502\r\n\u2502 
\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'truecolor' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 68 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions(size=ConsoleDimensions(width=298, height=68), legacy_windows=False, min_width=1, max_width=298, is_terminal=True, encoding='utf-8', max_height=68, justify=None, overflow=None, no_wrap=False, highlight=None, markup=None, height=None) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=298, height=68) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 298 
\u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables 
\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 {'TERM': 'xterm-256color', 'COLORTERM': 'truecolor', 'CLICOLOR': None, 'NO_COLOR': None, 'TERM_PROGRAM': 'iTerm.app', 'COLUMNS': None, 'LINES': None, 'JPY_PARENT_PID': None, 'VSCODE_VERBOSE_LOGGING': None} \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Darwin\"\r\nrich==11.2.0\r\n```\r\n\r\n(Same issue after upgrading to Rich 12)\r\n\r\n
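The fix the reporter goes on to prepare amounts to making the `RichCast` protocol recursive, so that `__rich__` may return another object that itself defines `__rich__`. A self-contained sketch of that shape, with a simplified stand-in for `ConsoleRenderable` (illustrative only, not the exact `rich/console.py` definitions):

```python
from typing import Protocol, Union

class ConsoleRenderable(Protocol):
    """Simplified stand-in for rich's renderable protocol."""
    def __rich_console__(self, console: object, options: object) -> object: ...

class RichCast(Protocol):
    """Castable to a renderable; the recursive "RichCast" in the return
    type is what lets __rich__ resolve through further __rich__ calls."""
    def __rich__(self) -> Union[ConsoleRenderable, "RichCast", str]: ...

class Progress:
    def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]:
        return "rendered progress"  # a str member of the union

renderable: RichCast = Progress()  # structurally compatible under mypy --strict
```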
\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2089", "file_loc": "{'base_commit': 'ef1b9b91ccff680b7f931d75fd92c3caa6fcd622', 'files': [{'path': 'rich/console.py', 'status': 'modified', 'Loc': {\"('RichCast', None, 265)\": {'mod': [268]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["rich/console.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "3da26192cba7dbaa3109fc0454e658ec417aaf5f", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/89", "iss_label": "", "title": "feature request: replace history with corrected command.", "body": "It would be a nice feature to correct the command and the history.\nI would also like an option to not add {fuck,thefuck} to the history.\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/384", "file_loc": "{'base_commit': '3da26192cba7dbaa3109fc0454e658ec417aaf5f', 'files': [{'path': 'thefuck/shells.py', 'status': 'modified', 'Loc': {\"('Fish', 'app_alias', 128)\": {'mod': [129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["thefuck/shells.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "61e722aa126207efcdbc1ddcd4453854ad44ea09", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10251", "iss_label": "", "title": "Extending Criterion", "body": "Unless I'm missing something, it's not completely trivial how one can use a custom `sklearn.tree._criterion.Criterion` for a decision tree. See my use case [here](https://stats.stackexchange.com/q/316954/98500).\r\n\r\nThings I have tried include:\r\n\r\n- Import the `ClassificationCriterion` in Python and subclass it. It seems that `node_impurity` and `children_impurity` do not get called, the impurity is always 0 (perhaps because they are `cdef` and not `cpdef`?). I'm also unsure what the parameters to `__new__` / `__cinit__` should be (e.g. `1` and `np.array([2], dtype='intp')` for a binary classification problem?), or how to pass them properly: I have to create the `Criterion` object from outside the tree to circumvent [the check on the `criterion` argument](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/tree/tree.py#L324).\r\n\r\n- Extend `ClassificationCriterion` in a Cython file. This seems to work, but (a) it requires exporting `ClassificationCriterion` from `_criterion.pxd` and (b) it would be nice if it would be documented more extensively what should be done in `node_impurity` and `children_impurity`. I will post my code below once it seems to work correctly.\r\n\r\nMay I propose one of the following to make this easier?\r\n\r\n- Document what should be done to extend the class in Cython or Python - if Python should be allowed: I am aware of the performance issue with that, but in some cases it may be OK to do this in Python - I don't know.\r\n- Make it possible to pass a function or other object not extending `Criterion` to the tree, similar to how it is very easy to implement a custom scorer for validation functions. 
That would require changing the checks [here](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/tree/tree.py#L324).", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10325", "file_loc": "{'base_commit': '61e722aa126207efcdbc1ddcd4453854ad44ea09', 'files': [{'path': 'sklearn/tree/_criterion.pxd', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [67]}}}, {'path': 'sklearn/tree/_criterion.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [215, 216, 707]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/tree/_criterion.pxd", "sklearn/tree/_criterion.pyx"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "3d19272be75fe32edd4cf01cb2eeac2281305e42", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/27682", "iss_label": "good first issue\ncython", "title": "MAINT Directly `cimport` interfaces from `std::algorithm`", "body": "Some Cython implementations use interfaces from the standard library of C++, namely `std::algorithm::move` and `std::algorithm::fill` from [`std::algorithm`](https://en.cppreference.com/w/cpp/algorithm/).\r\n\r\nBefore Cython 3, those interfaces had to be imported directly using the verbose syntax from Cython:\r\n - https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp#L22-L26\r\n - https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp#L28-L33\r\n\r\nCython 3 introduced the following line natively, for those interfaces. Those interfaces should now be `cimported` directly. That is one can replace the line shown above respectively with:\r\n\r\n```cython\r\nfrom libcpp.algorithm cimport move\r\nfrom libcpp.algorithm cimport fill\r\n```\r\n\r\nI believe this is a good first Cython issue.\r\n\r\nAny reader should feel free to pick it up. It might be possible that there is some context missing.\r\n\r\nPlease let me know if you need help. 
:slightly_smiling_face: ", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/28489", "commit_html_url": null, "file_loc": "{'base_commit': '3d19272be75fe32edd4cf01cb2eeac2281305e42', 'files': [{'path': 'sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 16)': {'add': [16]}, '(None, None, 28)': {'mod': [28, 29, 30, 31, 32, 33]}}}, {'path': 'sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 6)': {'add': [6]}, '(None, None, 22)': {'mod': [22, 23, 24, 25, 26]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp", "sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "3d19272be75fe32edd4cf01cb2eeac2281305e42", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/27682", "iss_label": "good first issue\ncython", "title": "MAINT Directly `cimport` interfaces from `std::algorithm`", "body": "Some Cython implementations use interfaces from the standard library of C++, namely `std::algorithm::move` and `std::algorithm::fill` from [`std::algorithm`](https://en.cppreference.com/w/cpp/algorithm/).\r\n\r\nBefore Cython 3, those interfaces had to be imported directly using the verbose syntax from Cython:\r\n - https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp#L22-L26\r\n - https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp#L28-L33\r\n\r\nCython 3 introduced the following line natively, for those interfaces. Those interfaces should now be `cimported` directly. That is one can replace the line shown above respectively with:\r\n\r\n```cython\r\nfrom libcpp.algorithm cimport move\r\nfrom libcpp.algorithm cimport fill\r\n```\r\n\r\nI believe this is a good first Cython issue.\r\n\r\nAny reader should feel free to pick it up. It might be possible that there is some context missing.\r\n\r\nPlease let me know if you need help. 
:slightly_smiling_face: ", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/28489", "commit_html_url": null, "file_loc": "{'base_commit': '3d19272be75fe32edd4cf01cb2eeac2281305e42', 'files': [{'path': 'sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 16)': {'add': [16]}, '(None, None, 28)': {'mod': [28, 29, 30, 31, 32, 33]}}}, {'path': 'sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 6)': {'add': [6]}, '(None, None, 22)': {'mod': [22, 23, 24, 25, 26]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp", "sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "fastapi", "repo_name": "fastapi", "base_commit": "033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8", "iss_has_pr": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/429", "iss_label": "bug\nreviewed", "title": "OpenAPI: HTTP_422 response does not use custom media_type", "body": "**Describe the bug**\r\nFastAPI automatically adds an HTTP_422 response to all paths in the OpenAPI specification that have parameters or request body. This response does not use the media_type of response_class if any custom defined. Furthermore, it overwrites any error object format with the default one.\r\n\r\n**To Reproduce**\r\nCreate a path with parameters and add custom response_class to decorator. Add custom exception handlers that reformat the default error responses as per your liking. Then observe generated openapi.json\r\n\r\n```python\r\nfrom fastapi import FastAPI, HTTPException\r\nfrom fastapi.exceptions import RequestValidationError\r\nfrom starlette import status\r\nfrom starlette.responses import JSONResponse\r\nfrom . 
import schemas\r\nfrom fastapi import Body  # Body is used in customer_create below but was missing from the imports\r\n\r\napp = FastAPI()\r\n\r\nclass JsonApiResponse(JSONResponse):\r\n media_type = 'application/vnd+json.api'\r\n\r\n@app.exception_handler(HTTPException)\r\nasync def http_exception_handler(request, exc: HTTPException) -> JsonApiResponse:\r\n headers = getattr(exc, \"headers\", None)\r\n content = schemas.ErrorResponse(errors=[dict(title=\"Bad request\", detail=exc.detail, status=exc.status_code)]).dict()\r\n status_code = exc.status_code\r\n if headers:\r\n return JsonApiResponse(content=content, status_code=status_code, headers=headers)\r\n else:\r\n return JsonApiResponse(content=content, status_code=status_code)\r\n\r\n@app.exception_handler(RequestValidationError)\r\nasync def request_validation_exception_handler(request, exc: RequestValidationError) -> JsonApiResponse:\r\n http422 = status.HTTP_422_UNPROCESSABLE_ENTITY\r\n return JsonApiResponse(\r\n content=schemas.ErrorResponse(errors=[\r\n dict(title=err['type'], detail=err['msg'], source='/'.join(err['loc']), status=http422)\r\n for err in exc.errors()\r\n ]).dict(),\r\n status_code=http422,\r\n )\r\n\r\n@app.post('/customers',\r\n status_code=status.HTTP_201_CREATED,\r\n response_model=schemas.CustomerDetailsResponse,\r\n response_class=JsonApiResponse,\r\n )\r\ndef customer_create(data: schemas.Customer = Body(..., media_type='application/vnd+json.api', embed=True)):\r\n created_customer = {**data.dict(), **{'id': '1'}}\r\n return {'data': created_customer}\r\n``` \r\n\r\nThe openapi.json will include the unwanted 422 response with the FastAPI default error object definitions:\r\n\r\n```yaml\r\n # ...\r\n '422':\r\n description: Validation Error\r\n content:\r\n application/json:\r\n schema:\r\n \"$ref\": \"#/components/schemas/HTTPValidationError\"\r\n```\r\n\r\n**Expected behavior**\r\nAt least, the media_type of the response_class should be respected. But it would be best if the 422 were not added to the specification unless requested via the path decorator. 
Or if the 422 definitions of mine were respected.\r\n\r\n```python\r\n@app.post('/customers',\r\n status_code=status.HTTP_201_CREATED,\r\n response_model=schemas.CustomerDetailsResponse,\r\n response_class=JsonApiResponse,\r\n responses={\r\n 422: {\r\n 'model': schemas.ErrorResponse\r\n },\r\n })\r\ndef customer_create(data: schemas.Customer = Body(..., media_type='application/vnd+json.api', embed=True)):\r\n pass\r\n```\r\n\r\n**Environment:**\r\n - OS: macOS 10.14.6\r\n - Python: 3.6.5\r\n - FastAPI: 0.35.0", "pr_html_url": "https://github.com/fastapi/fastapi/pull/437", "file_loc": "{'base_commit': '033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8', 'files': [{'path': 'fastapi/openapi/utils.py', 'status': 'modified', 'Loc': {\"(None, 'get_openapi_path', 142)\": {'add': [227], 'mod': [162, 163, 164, 165, 175, 176, 177, 178, 179, 191, 219, 220]}, \"(None, 'get_openapi_operation_parameters', 72)\": {'mod': [74, 75, 80, 81, 82, 94]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["fastapi/openapi/utils.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "d692a72bf3809df35d802041211fcd81d56b1dc6", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/710", "iss_label": "enhancement\nseverity:low", "title": "Tune rate-limit backoff", "body": "**What problem or use case are you trying to solve?**\r\nDue to the AnthropicException error, which indicates that the request limit has been reached, it is necessary to increase the interval between requests. This will prevent system overload and provide a stable service.\r\n\r\n**Describe the UX of the solution you'd like**\r\nFrom a user experience (UX) perspective, the most important aspect is to send requests at an appropriate interval. Sending requests too frequently will cause errors, while sending requests at too long an interval will result in longer response times. Therefore, finding the right balance is crucial. Additionally, informing users about the current status and estimated wait time would also contribute to a good UX.\r\n\r\n**Do you have thoughts on the technical implementation?**\r\nFrom a technical implementation standpoint, a mechanism to monitor and manage request limits is required. For example, tracking the number of requests and the time they were made, and stopping requests for a certain period of time once the limit is reached. Additionally, implementing an algorithm to dynamically adjust the request interval could be more efficient.\r\n\r\n**Additional context**\r\nAn additional consideration is the error handling mechanism. When a request limit error occurs, appropriate exception handling and retry logic should be implemented. 
Additionally, through logging and monitoring systems, the system's status should be continuously monitored, and issues should be promptly detected.", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/1120", "file_loc": "{'base_commit': 'd692a72bf3809df35d802041211fcd81d56b1dc6', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [179]}}}, {'path': 'agenthub/monologue_agent/utils/memory.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 7, 12]}}}, {'path': 'agenthub/monologue_agent/utils/monologue.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [1]}, \"('Monologue', 'get_total_length', 44)\": {'mod': [56]}, \"('Monologue', 'condense', 59)\": {'mod': [67, 77, 78]}}}, {'path': 'opendevin/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 42, 49], 'mod': [23, 24]}}}, {'path': 'opendevin/controller/agent_controller.py', 'status': 'modified', 'Loc': {\"('AgentController', 'step', 154)\": {'add': [175], 'mod': [173, 181, 182, 185, 186, 188, 189, 191]}, '(None, None, None)': {'mod': [2, 6, 7]}}}, {'path': 'opendevin/llm/llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15], 'mod': [3, 4, 8, 13, 14]}, \"('LLM', None, 18)\": {'add': [18]}, \"('LLM', '__init__', 19)\": {'add': [25], 'mod': [23, 24, 27, 38, 39, 40, 41, 42, 46]}}}, {'path': 'opendevin/schema/config.py', 'status': 'modified', 'Loc': {\"('ConfigType', None, 4)\": {'mod': [17]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["agenthub/monologue_agent/utils/memory.py", "opendevin/schema/config.py", "opendevin/llm/llm.py", "agenthub/monologue_agent/utils/monologue.py", "opendevin/config.py", "opendevin/controller/agent_controller.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "d16396138e8a61f9bc2c3c36ae8c4d7420d23782", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/663", "iss_label": "enhancement\nsweep", "title": "Sweep: Bump the release version in pyproject.toml", "body": "\n\n\n
\nChecklist\n\n- [X] `pyproject.toml`\n> \u2022 Locate the line where the version number is specified. It should be under the [project] section and the line should start with \"version = \".\n> \u2022 Determine the new version number according to the semantic versioning rules. If only minor changes or bug fixes have been made, increment the patch version. If new features have been added in a backwards-compatible manner, increment the minor version. If changes have been made that are not backwards-compatible, increment the major version.\n> \u2022 Update the version number in the pyproject.toml file. Replace the old version number with the new version number.\n> \u2022 Check if there are any dependencies or other parts of the project that rely on the version number. If there are, update these parts of the project as well.\n> \u2022 Commit the changes and push to the repository.\n\n
\n", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/666", "file_loc": "{'base_commit': 'd16396138e8a61f9bc2c3c36ae8c4d7420d23782', 'files': [{'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["pyproject.toml"], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "e748ca50ca3e83ac703e02538a27236fedd53a7d", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/728", "iss_label": "bug", "title": "get_func_args maximum recursion", "body": "https://github.com/scrapy/scrapy/blob/master/scrapy/utils/python.py#L149\n\nToday I was working on a project were I have to skip the first item of a list, and then join the rest. Instead of writing the typical slice I tried something much more good looking `Compose(itemgetter(slice(1, None)), Join())` but I found out this maximum recursion. I did some research and ask @dangra about it, but nothing came up.\nI think the main problem is that `inspect` isn't able recognize `itemgetter` as `something`.\n\n``` python\n>>> inspect.getmembers(itemgetter(2))\n[('__call__',\n ),\n ('__class__', ),\n ('__delattr__',\n ),\n ('__doc__',\n 'itemgetter(item, ...) --> itemgetter object\\n\\nReturn a callable object that fetches the given item(s) from its operand.\\nAfter, f=itemgetter(2), the call f(r) returns r[2].\\nAfter, g=itemgetter(2,5,3), the call g(r) returns (r[2], r[5], r[3])'),\n ('__format__',\n ),\n ('__getattribute__',\n ),\n ('__hash__',\n ),\n ('__init__',\n ),\n ('__new__', ),\n ('__reduce__',\n ),\n ('__reduce_ex__',\n ),\n ('__repr__',\n ),\n ('__setattr__',\n ),\n ('__sizeof__',\n ),\n ('__str__',\n ),\n ('__subclasshook__',\n )]\n>>> inspect.getargspec(itemgetter(2).__call__)\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/usr/lib/python2.7/inspect.py\", line 815, in getargspec\n raise TypeError('{!r} is not a Python function'.format(func))\nTypeError: is not a Python function\n>>> inspect.getargspec(itemgetter(slice(None, 2)).__init__)\nTraceback (most recent call last):\n File \"\", line 1, in \n File \"/usr/lib/python2.7/inspect.py\", line 815, in getargspec\n raise TypeError('{!r} is not a Python function'.format(func))\nTypeError: is not a Python function\n```\n\nEDIT: Looks like the reason was C functions weren't covered by inspect module until Python 3.4 (http://bugs.python.org/issue17481)\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/809", "file_loc": "{'base_commit': 'e748ca50ca3e83ac703e02538a27236fedd53a7d', 'files': [{'path': 'scrapy/tests/test_utils_python.py', 'status': 'modified', 'Loc': {\"('UtilsPythonTestCase', 'test_get_func_args', 158)\": {'add': [195]}}}, {'path': 'scrapy/utils/python.py', 'status': 'modified', 'Loc': {\"(None, 'get_func_args', 134)\": {'add': [149]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["scrapy/utils/python.py"], "doc": [], "test": ["scrapy/tests/test_utils_python.py"], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "626a0a01471accc32ded29ccca3ed93c4995fcd6", "is_iss": 0, "iss_html_url": 
"https://github.com/huggingface/transformers/issues/9954", "iss_label": "TensorFlow\nTests\nGood First Issue", "title": "[Good first issue] LXMERT TensorFlow Integration tests", "body": "The TensorFlow implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.\r\n\r\nThe [test_modeling_tf_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_lxmert.py) file should be updated to include integration testing.\r\n\r\nAn example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:\r\n\r\nhttps://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387\r\n\r\nSome additional tips:\r\n- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.\r\n- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using PyTorch.\r\n- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time.", "code": null, "pr_html_url": "https://github.com/huggingface/transformers/pull/12497", "commit_html_url": null, "file_loc": "{'base_commit': '626a0a01471accc32ded29ccca3ed93c4995fcd6', 'files': [{'path': 'tests/test_modeling_tf_lxmert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, \"('TFLxmertModelTest', 'test_saved_model_creation_extended', 710)\": {'add': [770]}, \"('TFLxmertModelTest', 'test_pt_tf_model_equivalence', 487)\": {'mod': [558]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/test_modeling_tf_lxmert.py"], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "626a0a01471accc32ded29ccca3ed93c4995fcd6", "iss_html_url": "https://github.com/huggingface/transformers/issues/9954", "iss_label": "TensorFlow\nTests\nGood First Issue", "title": "[Good first issue] LXMERT TensorFlow Integration tests", "body": "The TensorFlow implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.\r\n\r\nThe [test_modeling_tf_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_lxmert.py) file should be updated to include integration testing.\r\n\r\nAn example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:\r\n\r\nhttps://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387\r\n\r\nSome additional tips:\r\n- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.\r\n- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using PyTorch.\r\n- A single test is necessary. 
If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time.", "code": null, "pr_html_url": "https://github.com/huggingface/transformers/pull/12497", "commit_html_url": null, "file_loc": "{'base_commit': '626a0a01471accc32ded29ccca3ed93c4995fcd6', 'files': [{'path': 'tests/test_modeling_tf_lxmert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, \"('TFLxmertModelTest', 'test_saved_model_creation_extended', 710)\": {'add': [770]}, \"('TFLxmertModelTest', 'test_pt_tf_model_equivalence', 487)\": {'mod': [558]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/test_modeling_tf_lxmert.py"], "config": [], "asset": []}} {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "710df2140555030e4d86e669d6df2deb852bcaf5", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/24115", "iss_label": "Bug\nDatetime\nAlgos", "title": "DTA/TDA/PA inplace methods should actually be inplace", "body": "At the moment we are using the implementations designed for Index subclasses, which return new objects.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/30505", "file_loc": "{'base_commit': '710df2140555030e4d86e669d6df2deb852bcaf5', 'files': [{'path': 'doc/source/whatsnew/v1.0.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [719]}}}, {'path': 'pandas/core/arrays/datetimelike.py', 'status': 'modified', 'Loc': {\"('DatetimeLikeArrayMixin', None, 316)\": {'mod': [1314]}, \"('DatetimeLikeArrayMixin', '__iadd__', 1315)\": {'mod': [1316, 1317]}, \"('DatetimeLikeArrayMixin', '__isub__', 1319)\": {'mod': [1320, 1321]}}}, {'path': 'pandas/tests/arrays/test_datetimelike.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [227]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": ["pandas/core/arrays/datetimelike.py"], "doc": ["doc/source/whatsnew/v1.0.0.rst"], "test": ["pandas/tests/arrays/test_datetimelike.py"], "config": [], "asset": []}} {"organization": "3b1b", "repo_name": "manim", "base_commit": "0092ac9a2a20873c7c077cefc4d68397a6df2ada", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/30", "iss_label": "", "title": "TypeError while running a triangle.py scene", "body": "I got an error when I try to run some of the [old_projects/triangle_of_power/triangle.py](https://github.com/3b1b/manim/blob/master/old_projects/triangle_of_power/triangle.py) scene.\r\nMy command is:\r\n```\r\npython extract_scene.py -p old_projects/triangle_of_power/triangle.py DrawInsideTriangle\r\n```\r\n\r\nBut after that I get:\r\n```\r\nTraceback (most recent call last):\r\n File \"extract_scene.py\", line 187, in main\r\n handle_scene(SceneClass(**scene_kwargs), **config)\r\n File \"/home/loic/Sources/Git/manim/scene/scene.py\", line 47, in __init__\r\n self.construct(*self.construct_args)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 527, in construct\r\n top = TOP()\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 91, in __init__\r\n VMobject.__init__(self, **kwargs)\r\n File 
\"/home/loic/Sources/Git/manim/mobject/mobject.py\", line 33, in __init__\r\n self.generate_points()\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 104, in generate_points\r\n self.set_values(self.x, self.y, self.z)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 108, in set_values\r\n self.set_value(i, mob)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 111, in set_value\r\n self.values[index] = self.put_on_vertex(index, value)\r\n File \"/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py\", line 125, in put_on_vertex\r\n value.center()\r\n File \"/home/loic/Sources/Git/manim/mobject/mobject.py\", line 230, in center\r\n self.shift(-self.get_center())\r\n File \"/home/loic/Sources/Git/manim/mobject/mobject.py\", line 124, in shift\r\n mob.points += total_vector\r\nTypeError: Cannot cast ufunc add output from dtype('float64') to dtype('int64') with casting rule 'same_kind'\r\n```\r\nAnd then the fail sound.\r\n\r\nIs there something wrong in what am I doing?", "pr_html_url": "https://github.com/3b1b/manim/pull/31", "file_loc": "{'base_commit': '0092ac9a2a20873c7c077cefc4d68397a6df2ada', 'files': [{'path': 'mobject/mobject.py', 'status': 'modified', 'Loc': {\"('Mobject', 'shift', 121)\": {'mod': [123]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["mobject/mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5cae13fd0a9b6e5a6f3f39c798cf693675795d89", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/733", "iss_label": "", "title": "LLM may generate comments inside [CONTENT][/CONTENT] , which causes parsing the JSON to fail.", "body": "**Bug description**\r\n```\r\nparse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)\r\n```\r\n\r\n\r\n**Bug solved method**\r\n\r\n\r\n\r\nPerhaps we could consider adding a constraint to the prompt, indicating not to generate comments inside [CONTENT][/CONTENT], or alternatively, we could trim the comments from the LLM's output.\r\n\r\n**Environment information**\r\n\r\n\r\n- LLM type and model name: OPENAI gpt-4-1106-preview\r\n- System version: macos 12.5.1\r\n- Python version: python 3.9\r\n\r\n\r\n\r\n- packages version: metagpt commit 82a5eec72707dee44174eae8f8ff1490a6819ecd\r\n- installation method: pip install from source\r\n\r\n**Screenshots or logs**\r\n\r\n\r\n```\r\n[CONTENT]\r\n{\r\n \"Required Python packages\": [\r\n \"numpy==1.21.2\",\r\n \"Kivy==2.0.0\",\r\n \"pygame==2.0.1\",\r\n \"sqlite3==2.6.0\" # sqlite3 is included in Python's standard library, but versioning is for consistency\r\n ],\r\n \"Required Other language third-party packages\": [\r\n \"No third-party dependencies required\"\r\n ],\r\n \"Logic Analysis\": [\r\n [\r\n \"game.py\",\r\n \"Contains Game class with core game logic, uses numpy for array manipulation, and interacts with UI and Storage classes\"\r\n ],\r\n [\r\n \"main.py\",\r\n \"Contains main function, initializes the game by calling start_new_game() from Game class\"\r\n ],\r\n [\r\n \"ui.py\",\r\n \"Contains UI class for user interface, uses Kivy for rendering, and interacts with Game class\"\r\n ],\r\n [\r\n 
\"storage.py\",\r\n \"Contains Storage class for saving and loading high scores using SQLite\"\r\n ]\r\n ],\r\n \"Task list\": [\r\n \"storage.py\",\r\n \"game.py\",\r\n \"ui.py\",\r\n \"main.py\"\r\n ],\r\n \"Full API spec\": \"\",\r\n \"Shared Knowledge\": \"'game.py' contains the Game class which is central to the game logic and is used by both 'ui.py' for rendering the game state and 'storage.py' for saving the high score.\",\r\n \"Anything UNCLEAR\": \"The monetization strategy for the game is not specified. Will the game include ads, in-app purchases, or be a paid app? This will affect the design of the user interface and potentially the choice of libraries or frameworks.\"\r\n}\r\n[/CONTENT]\r\n2024-01-10 14:58:53.419 | INFO | metagpt.utils.cost_manager:update_cost:48 - Total running cost: $0.199 | Max budget: $10.000 | Current cost: $0.021, prompt_tokens: 1021, completion_tokens: 352\r\n2024-01-10 14:58:53.423 | WARNING | metagpt.utils.repair_llm_raw_output:run_and_passon:235 - parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)\r\n2024-01-10 14:58:53.424 | INFO | metagpt.utils.repair_llm_raw_output:repair_invalid_json:204 - repair_invalid_json, raw error: Expecting ',' delimiter: line 6 column 27 (char 135)\r\n2024-01-10 14:58:53.424 | ERROR | metagpt.utils.common:log_it:438 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 222.144(s), this was the 6th time calling it. exp: RetryError[]\r\n2024-01-10 14:58:53.424 | WARNING | metagpt.utils.common:wrapper:510 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.\r\n2024-01-10 14:58:53.430 | ERROR | metagpt.utils.common:wrapper:492 - Exception occurs, start to serialize the project, exp:\r\n```\r\n", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/963", "file_loc": "{'base_commit': '5cae13fd0a9b6e5a6f3f39c798cf693675795d89', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15], 'mod': [6]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.example.yaml"], "asset": []}} @@ -75,7 +75,7 @@ {"organization": "Textualize", "repo_name": "rich", "base_commit": "2ee992b17ef5ff3c34f89545b0d57ad4690a64fc", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2422", "iss_label": "Needs triage", "title": "[BUG] Databricks is not identified as Jupyter", "body": "You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).\r\n\r\n**Describe the bug**\r\n\r\nDatabricks is not considered as \"Jupyter\", therefore `JUPYTER_LINES` and `JUPYTER_COLUMNS` has no effect on the console log\r\n\r\nProvide a minimal code example that demonstrates the issue if you can. 
If the issue is visual in nature, consider posting a screenshot.\r\n\r\nDatabricks has a Ipython type `InteractiveShell` which is neither `Ipython` or `ZMQInteractiveShell`\r\n\r\n![image](https://user-images.githubusercontent.com/18221871/181251880-531dbfc5-0f35-44ba-a1c2-c07e0a075cc7.png)\r\n\r\n\r\n```python\r\ndef _is_jupyter() -> bool: # pragma: no cover\r\n \"\"\"Check if we're running in a Jupyter notebook.\"\"\"\r\n try:\r\n get_ipython # type: ignore[name-defined]\r\n except NameError:\r\n return False\r\n ipython = get_ipython() # type: ignore[name-defined]\r\n shell = ipython.__class__.__name__\r\n if \"google.colab\" in str(ipython.__class__) or shell == \"ZMQInteractiveShell\":\r\n return True # Jupyter notebook or qtconsole\r\n elif shell == \"TerminalInteractiveShell\":\r\n return False # Terminal running IPython\r\n else:\r\n return False # Other type (?)\r\n```\r\n\r\nIf you're using Rich in a terminal:\r\n\r\n```\r\npython -m rich.diagnose\r\npip freeze | grep rich\r\n```\r\n\r\nIf you're using Rich in a Jupyter Notebook, run the following snippet in a cell\r\nand paste the output in your bug report.\r\n\r\n```python\r\nfrom rich.diagnose import report\r\nreport()\r\n```\r\n\r\n```\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = None \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = \u2502\r\n\u2502 height = 25 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = False \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = False \u2502\r\n\u2502 legacy_windows = False \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=80, height=25), \u2502\r\n\u2502 legacy_windows=False, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=80, \u2502\r\n\u2502 is_terminal=False, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=25, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True 
\u2502\r\n\u2502 size = ConsoleDimensions(width=80, height=25) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 80 \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500 \u2500\u2500\u2500\u2500\u256e\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'unknown', \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': None, \u2502\r\n\u2502 'LINES': None, \u2502\r\n\u2502 'JUPYTER_COLUMNS': '200', \u2502\r\n\u2502 'JUPYTER_LINES': '50', \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nplatform=\"Linux\"\r\n```\r\n\r\n\r\n", "pr_html_url": "https://github.com/Textualize/rich/pull/2424", "file_loc": "{'base_commit': '2ee992b17ef5ff3c34f89545b0d57ad4690a64fc', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'CONTRIBUTORS.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}}}, {'path': 'rich/console.py', 'status': 'modified', 'Loc': {\"(None, '_is_jupyter', 511)\": {'mod': [519]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["rich/console.py"], "doc": ["CONTRIBUTORS.md", 
"CHANGELOG.md"], "test": [], "config": [], "asset": []}} {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "65d7a9b9902ad85f27b17d759bd13b59c2afc474", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/590", "iss_label": "", "title": "Please update README.md", "body": "I recently tried using it by following the steps in the README.md file and it does not work, please update the file.\r\n\r\nI keep getting this error when i try to export/set the API key\r\n\r\nopenai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored\r\nin a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/592", "file_loc": "{'base_commit': '65d7a9b9902ad85f27b17d759bd13b59c2afc474', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, \"(None, 'load_env_if_needed', 19)\": {'add': [21]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/main.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "2b6f70fdb4f0238b2cf6afdb6473a764e090060f", "iss_has_pr": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/226", "iss_label": "", "title": "Cannot import name 'BaseLanguageModel' from 'langchain.schema'", "body": "**Describe the bug**\r\nA clear and concise description of what the bug is.\r\n\r\n**Browser and Version**\r\n - N/A\r\n - macOS 13.3.1 (22E261)\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Install miniconda with Python 3.10.10\r\n2. Install langflow\r\n3. Run langflow\r\n4. 
See error:\r\nImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/Users/user/miniconda3/lib/python3.10/site-packages/langchain/schema.py)\r\n", "pr_html_url": "https://github.com/langflow-ai/langflow/pull/229", "file_loc": "{'base_commit': '2b6f70fdb4f0238b2cf6afdb6473a764e090060f', 'files': [{'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [706, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 1711, 1717, 1718, 3955, 3961, 3962, 3963, 3964, 3965, 3966, 3967, 3968, 3969, 3970, 3971, 3972, 3973, 3974, 3975, 3976, 3977, 3978, 3979, 3980, 3981, 3982, 3983, 3984, 3985, 3986, 3987, 3988, 3989, 3990, 3991, 3992, 3993, 3994, 3995, 3996, 3997, 3998, 3999, 4000, 4001, 4499, 4505, 4506]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}, {'path': 'src/backend/langflow/interface/agents/custom.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [31]}}}, {'path': 'src/backend/langflow/interface/agents/prebuilt.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'src/backend/langflow/interface/tools/util.py', 'status': 'modified', 'Loc': {\"(None, 'get_func_tool_params', 8)\": {'mod': [22, 24, 25, 26]}}}, {'path': 'src/backend/langflow/interface/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'src/backend/langflow/template/nodes.py', 'status': 'modified', 'Loc': {\"('ChainFrontendNode', 'format_field', 536)\": {'add': [561]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/backend/langflow/interface/agents/custom.py", "src/backend/langflow/interface/utils.py", "src/backend/langflow/template/nodes.py", "src/backend/langflow/interface/tools/util.py", "src/backend/langflow/interface/agents/prebuilt.py"], "doc": [], "test": [], "config": ["pyproject.toml", "poetry.lock"], "asset": []}} -{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6", "is_iss": 0, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/834", "iss_label": "bug", "title": "Old node modules need cleared out (Cannot read properties of null (reading 'edgesOut')", "body": "\r\n#### Describe the bug\r\ntrying to run make build on the latest code and it ends up in this error:\r\n\r\nCannot read properties of null (reading 'edgesOut')\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n\r\n```\r\ncommit 229fa988c575c291cff6ffc1f9d15814d9d2a884 (HEAD -> main, origin/main, origin/HEAD)\r\nAuthor: Xingyao Wang \r\nDate: Sun Apr 7 01:04:17 2024 +0800\r\n\r\n remove seed=42 to fix #813 (#830)\r\n```\r\n\r\n\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```\r\nLLM_API_KEY=\"ollama\"\r\nLLM_MODEL=\"ollama/dolphin-mixtral:latest\"\r\nLLM_EMBEDDING_MODEL=\"local\"\r\nLLM_BASE_URL=\"http://localhost:11434\"\r\nWORKSPACE_DIR=\"./workspace\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model:\r\n* Agent:\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\nmake build\r\n```\r\n\r\n**Steps to Reproduce**:\r\n1. pull latest code\r\n2. 
make build\r\n3.\r\n\r\n**Logs, error messages, and screenshots**:\r\n```\r\n142 http fetch GET 200 https://registry.npmjs.org/@swc%2fcore 6ms (cache hit)\r\n143 silly fetch manifest @swc/helpers@^0.5.0\r\n144 http fetch GET 200 https://registry.npmjs.org/@swc%2fhelpers 2ms (cache hit)\r\n145 silly fetch manifest postcss@^8.4.12\r\n146 http fetch GET 200 https://registry.npmjs.org/postcss 6ms (cache hit)\r\n147 silly fetch manifest typescript@>=4.1.0\r\n148 http fetch GET 200 https://registry.npmjs.org/typescript 50ms (cache hit)\r\n149 silly fetch manifest typescript@^4.9.5\r\n150 silly fetch manifest vitest@^0.29.2\r\n151 silly fetch manifest @vitest/browser@*\r\n152 silly fetch manifest vitest@1.4.0\r\n153 silly fetch manifest @types/node@^18.0.0 || >=20.0.0\r\n154 timing idealTree Completed in 4380ms\r\n155 timing command:install Completed in 4385ms\r\n156 verbose stack TypeError: Cannot read properties of null (reading 'edgesOut')\r\n156 verbose stack at #loadPeerSet (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:1313:38)\r\n156 verbose stack at async #buildDepStep (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:924:11)\r\n156 verbose stack at async Arborist.buildIdealTree (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:203:7)\r\n156 verbose stack at async Promise.all (index 1)\r\n156 verbose stack at async Arborist.reify (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:154:5)\r\n156 verbose stack at async Install.exec (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/commands/install.js:153:5)\r\n156 verbose stack at async module.exports (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/cli-entry.js:61:5)\r\n157 verbose cwd /home/atlas/OpenDevin/frontend\r\n158 verbose Linux 6.6.4-060604-generic\r\n159 verbose node v18.20.1\r\n160 verbose npm v10.5.0\r\n161 error Cannot read properties of null (reading 'edgesOut')\r\n162 verbose exit 1\r\n163 timing npm Completed in 4511ms\r\n164 verbose unfinished npm timer reify 1712423688807\r\n165 verbose unfinished npm timer reify:loadTrees 1712423688810\r\n166 verbose unfinished npm timer idealTree:buildDeps 1712423691257\r\n167 verbose unfinished npm timer idealTree:node_modules/.pnpm/@monaco-editor+react@4.6.0_monaco-editor@0.47.0_react-dom@18.2.0_react@18.2.0/node_modules/@monaco-editor/react 1712423692071\r\n168 verbose code 1\r\n169 error A complete log of this run can be found in: /home/atlas/.npm/_logs/2024-04-06T17_14_48_682Z-debug-0.log\r\n\r\n```\r\n#### Additional Context\r\n\r\n", "code": null, "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/867", "commit_html_url": null, "file_loc": "{'base_commit': 'a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6', 'files': [{'path': 'opendevin/logging.py', 'status': 'modified', 'Loc': {\"(None, 'get_llm_prompt_file_handler', 118)\": {'mod': [123]}, \"(None, 'get_llm_response_file_handler', 128)\": {'mod': [133]}, '(None, None, None)': {'mod': [139, 144]}}}]}", "own_code_loc": [], "ass_file_loc": ["frontend/node_modules"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["opendevin/logging.py"], "doc": [], "test": [], "config": [], "asset": ["frontend/node_modules"]}} 
+{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/834", "iss_label": "bug", "title": "Old node modules need cleared out (Cannot read properties of null (reading 'edgesOut')", "body": "\r\n#### Describe the bug\r\ntrying to run make build on the latest code and it ends up in this error:\r\n\r\nCannot read properties of null (reading 'edgesOut')\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n\r\n```\r\ncommit 229fa988c575c291cff6ffc1f9d15814d9d2a884 (HEAD -> main, origin/main, origin/HEAD)\r\nAuthor: Xingyao Wang \r\nDate: Sun Apr 7 01:04:17 2024 +0800\r\n\r\n remove seed=42 to fix #813 (#830)\r\n```\r\n\r\n\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```\r\nLLM_API_KEY=\"ollama\"\r\nLLM_MODEL=\"ollama/dolphin-mixtral:latest\"\r\nLLM_EMBEDDING_MODEL=\"local\"\r\nLLM_BASE_URL=\"http://localhost:11434\"\r\nWORKSPACE_DIR=\"./workspace\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model:\r\n* Agent:\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\nmake build\r\n```\r\n\r\n**Steps to Reproduce**:\r\n1. pull latest code\r\n2. make build\r\n3.\r\n\r\n**Logs, error messages, and screenshots**:\r\n```\r\n142 http fetch GET 200 https://registry.npmjs.org/@swc%2fcore 6ms (cache hit)\r\n143 silly fetch manifest @swc/helpers@^0.5.0\r\n144 http fetch GET 200 https://registry.npmjs.org/@swc%2fhelpers 2ms (cache hit)\r\n145 silly fetch manifest postcss@^8.4.12\r\n146 http fetch GET 200 https://registry.npmjs.org/postcss 6ms (cache hit)\r\n147 silly fetch manifest typescript@>=4.1.0\r\n148 http fetch GET 200 https://registry.npmjs.org/typescript 50ms (cache hit)\r\n149 silly fetch manifest typescript@^4.9.5\r\n150 silly fetch manifest vitest@^0.29.2\r\n151 silly fetch manifest @vitest/browser@*\r\n152 silly fetch manifest vitest@1.4.0\r\n153 silly fetch manifest @types/node@^18.0.0 || >=20.0.0\r\n154 timing idealTree Completed in 4380ms\r\n155 timing command:install Completed in 4385ms\r\n156 verbose stack TypeError: Cannot read properties of null (reading 'edgesOut')\r\n156 verbose stack at #loadPeerSet (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:1313:38)\r\n156 verbose stack at async #buildDepStep (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:924:11)\r\n156 verbose stack at async Arborist.buildIdealTree (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:203:7)\r\n156 verbose stack at async Promise.all (index 1)\r\n156 verbose stack at async Arborist.reify (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:154:5)\r\n156 verbose stack at async Install.exec (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/commands/install.js:153:5)\r\n156 verbose stack at async module.exports (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/cli-entry.js:61:5)\r\n157 verbose cwd /home/atlas/OpenDevin/frontend\r\n158 verbose Linux 6.6.4-060604-generic\r\n159 verbose node v18.20.1\r\n160 verbose npm v10.5.0\r\n161 error Cannot read properties of null (reading 'edgesOut')\r\n162 verbose exit 1\r\n163 timing npm Completed in 4511ms\r\n164 verbose unfinished npm timer reify 
1712423688807\r\n165 verbose unfinished npm timer reify:loadTrees 1712423688810\r\n166 verbose unfinished npm timer idealTree:buildDeps 1712423691257\r\n167 verbose unfinished npm timer idealTree:node_modules/.pnpm/@monaco-editor+react@4.6.0_monaco-editor@0.47.0_react-dom@18.2.0_react@18.2.0/node_modules/@monaco-editor/react 1712423692071\r\n168 verbose code 1\r\n169 error A complete log of this run can be found in: /home/atlas/.npm/_logs/2024-04-06T17_14_48_682Z-debug-0.log\r\n\r\n```\r\n#### Additional Context\r\n\r\n", "code": null, "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/867", "commit_html_url": null, "file_loc": "{'base_commit': 'a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6', 'files': [{'path': 'opendevin/logging.py', 'status': 'modified', 'Loc': {\"(None, 'get_llm_prompt_file_handler', 118)\": {'mod': [123]}, \"(None, 'get_llm_response_file_handler', 128)\": {'mod': [133]}, '(None, None, None)': {'mod': [139, 144]}}}]}", "own_code_loc": [], "ass_file_loc": ["frontend/node_modules"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["opendevin/logging.py"], "doc": [], "test": [], "config": [], "asset": ["frontend/node_modules"]}} {"organization": "localstack", "repo_name": "localstack", "base_commit": "2fe8440b619329891db150e45910e8aaad97b7ce", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/4987", "iss_label": "type: bug\nstatus: triage needed\naws:s3", "title": "bug: The Content-MD5 you specified did not match what we received", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nI started getting the following exception\r\n\r\n```\r\ncom.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received. 
\r\n(Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: null; S3 Extended Request ID: null; Proxy: null)\r\n```\r\n\r\nafter upgrade to `localstack/localstack-light:latest`, reverting back to `localstack/localstack-light:0.13.0` fixes it for me.\r\n\n\n### Expected Behavior\n\nNo exception.\n\n### How are you starting LocalStack?\n\nCustom (please describe below)\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n Using https://www.testcontainers.org/ to start the test.\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n```\r\n@Bean\r\npublic AmazonS3 createAmazonS3() {\r\n final DockerImageName diName = DockerImageName.parse(\"localstack/localstack-light:latest\").asCompatibleSubstituteFor(\"localstack/localstack\");\r\n final LocalStackContainer localstack = new LocalStackContainer(diName)\r\n .withServices(S3);\r\n localstack.addEnv(\"AWS_ACCESS_KEY\", \"test\");\r\n localstack.addEnv(\"AWS_SECRET_ACCESS_KEY\", \"567\");\r\n localstack.addEnv(\"AWS_REGION\", \"us-east-1\");\r\n localstack.addEnv(\"LS_LOG\", \"trace\");\r\n localstack.start();\r\n return AmazonS3ClientBuilder\r\n .standard()\r\n .withEndpointConfiguration(localstack.getEndpointConfiguration(S3))\r\n .withCredentials(localstack.getDefaultCredentialsProvider())\r\n .build();\r\n }\r\n```\r\n\r\nthen calling `store` on `org.springframework.core.io.Resource` which is `SimpleStorageResource`.\r\n\n\n### Environment\n\n```markdown\n- OS: macOS Catalina 10.15.7\r\n- LocalStack: latest\n```\n\n\n### Anything else?\n\n`LS_LOG=trace` with `localstack/localstack-light:0.13.0`\r\n\r\n```\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: IN(s3): \"GET /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52476', 'Amz-Sdk-Invocation-Id': '307eaac4-b1b6-d23e-96b1-a6dcff7d5414', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=72f59f88e302656e9e4c77308f1de7925f5b63aec3efec93dd9d5f32ae6a2b6d', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'X-Amz-Date': '20211122T191203Z', 'Content-Length': '0', 'Connection': 'Keep-Alive', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52476', 'x-localstack-edge': 'http://127.0.0.1:52476'} - data: b''\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'UJFL1535CHVAFPN2JLH2ACBUQX026PCCCTNN0RSBF4PJHULNR1AR', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'x-amz-request-id': '3DAD4B54E96B3CA1', 'x-amz-id-2': 'MzRISOwyjmnup3DAD4B54E96B3CA17/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 
7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'UJFL1535CHVAFPN2JLH2ACBUQX026PCCCTNN0RSBF4PJHULNR1AR', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'x-amz-request-id': '3DAD4B54E96B3CA1', 'x-amz-id-2': 'MzRISOwyjmnup3DAD4B54E96B3CA17/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: IN(s3): \"PUT /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52476', 'Amz-Sdk-Invocation-Id': '8a6682d3-1481-f538-4ed4-4ac03c4e4ec3', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=282e9062c19a5a575d49902c3c642928039a210c8d5eb54de069655f10ef94ea', 'Content-Md5': 'pX8KKuGXS1f2VTcuJpqjkw==', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'X-Amz-Date': '20211122T191203Z', 'X-Amz-Decoded-Content-Length': '147', 'Content-Length': '320', 'Connection': 'Keep-Alive', 'Expect': '100-continue', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52476', 'x-localstack-edge': 'http://127.0.0.1:52476'} - data: b'93;chunk-signature=68bf4c0366a3d4c963efb7eaf3426c439ac26f9ca077b6c71e1bd60de69f0259\\r\\n#20211122+0100\\n#Mon Nov 22 20:12:03 CET 2021\\nlast.sync.url.test-space-key=2822a50f-4992-425a-b8fb-923735a9ddff317e3479-5907-46cf-b33a-60da9709274f\\n\\r\\n0;chunk-signature=bf3a6ecc9d3913d2ad6618d420c1db6abefb4f452469693ffc5bbd038ad2f2f0\\r\\n\\r\\n'\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 200 - response headers: {'ETag': '\"a57f0a2ae1974b57f655372e269aa393\"', 'last-modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Content-Length': '0', 'x-amzn-requestid': '1EYVT7AJ5TJ3JH1SK3ZVTHBBB860EIC4FTOP9VPHCSHR967AFFAP', 'Content-Type': 'text/html; charset=utf-8', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Location': '/test-bucket-name', 'x-amz-request-id': '5BC855D1EAAEFD00', 'x-amz-id-2': 'MzRISOwyjmnup5BC855D1EAAEFD007/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp'} - response: b''\r\n2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 200 - response headers: {'ETag': '\"a57f0a2ae1974b57f655372e269aa393\"', 'last-modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Content-Length': '0', 'x-amzn-requestid': '1EYVT7AJ5TJ3JH1SK3ZVTHBBB860EIC4FTOP9VPHCSHR967AFFAP', 'Content-Type': 'text/html; charset=utf-8', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 
19:12:03 GMT', 'Location': '/test-bucket-name', 'x-amz-request-id': '5BC855D1EAAEFD00', 'x-amz-id-2': 'MzRISOwyjmnup5BC855D1EAAEFD007/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp'} - response: b''\r\n```\r\n\r\n----\r\n\r\n`LS_LOG=trace` with `localstack/localstack-light:latest`\r\n\r\n```\r\n2021-11-22T19:10:42.097:DEBUG:localstack.services.edge: IN(s3): \"GET /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52438', 'Amz-Sdk-Invocation-Id': '3f452c53-2a97-15f7-8f44-96c3b3d4aa27', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=a8c7d475d338c92c01eca9638e858e8f0e84ae73498435a55520ee04ff655476', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'X-Amz-Date': '20211122T191042Z', 'Content-Length': '0', 'Connection': 'Keep-Alive', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52438', 'x-localstack-edge': 'http://127.0.0.1:52438'} - data: b''\r\n2021-11-22T19:10:42.118:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'RMJVBYKAH478ETR8T1G9DQ4TUHEIKKB96892NRKM3PYQYRVUPI8M', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:10:42 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:42 GMT', 'x-amz-request-id': '7D83EFCB204B6EC9', 'x-amz-id-2': 'MzRISOwyjmnup7D83EFCB204B6EC97/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:10:42.119:DEBUG:localstack.services.edge: OUT(s3): \"GET /test-bucket-name/test-runtime.properties\" - status: 404 - response headers: {'x-amzn-requestid': 'RMJVBYKAH478ETR8T1G9DQ4TUHEIKKB96892NRKM3PYQYRVUPI8M', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:10:42 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:42 GMT', 'x-amz-request-id': '7D83EFCB204B6EC9', 'x-amz-id-2': 'MzRISOwyjmnup7D83EFCB204B6EC97/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'\\n\\n NoSuchKey\\n The specified key does not exist.\\n \\n 7a62c49f-347e-4fc4-9331-6e8eEXAMPLE\\n'\r\n2021-11-22T19:10:45.164:DEBUG:localstack.services.edge: IN(s3): \"PUT /test-bucket-name/test-runtime.properties\" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52438', 'Amz-Sdk-Invocation-Id': '3446d18f-08a6-2432-a4dc-f79846c9655e', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, 
Signature=56f95a44e31918932bc863893064a1fcafbf4066d44bc44c8d078cf420316011', 'Content-Md5': 'Xi4HEV9K00jfK4+6lHxpDA==', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'X-Amz-Date': '20211122T191045Z', 'X-Amz-Decoded-Content-Length': '147', 'Content-Length': '320', 'Connection': 'Keep-Alive', 'Expect': '100-continue', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52438', 'x-localstack-edge': 'http://127.0.0.1:52438'} - data: b'93;chunk-signature=5be6b2d473e96bb9f297444da60bdf0ff8f5d2e211e1d551b3cf3646c0946641\\r\\n#20211122+0100\\n#Mon Nov 22 20:10:44 CET 2021\\nlast.sync.url.test-space-key=2822a50f-4992-425a-b8fb-923735a9ddff317e3479-5907-46cf-b33a-60da9709274f\\n\\r\\n0;chunk-signature=bd5c830b94346b57ddc8805ba26c44a122256c207014433bf6579b0985f21df7\\r\\n\\r\\n'\r\n2021-11-22T19:10:45.167:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 400 - response headers: {'Content-Type': 'application/xml', 'Location': '/test-bucket-name', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:45 GMT', 'x-amz-request-id': '20278550A22502FB', 'x-amz-id-2': 'MzRISOwyjmnup20278550A22502FB7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Content-Length': '156'} - response: \r\nBadDigestThe Content-MD5 you specified did not match what we received.\r\n2021-11-22T19:10:45.168:DEBUG:localstack.services.edge: OUT(s3): \"PUT /test-bucket-name/test-runtime.properties\" - status: 400 - response headers: {'Content-Type': 'application/xml', 'Location': '/test-bucket-name', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:45 GMT', 'x-amz-request-id': '20278550A22502FB', 'x-amz-id-2': 'MzRISOwyjmnup20278550A22502FB7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Content-Length': '156'} - response: \r\nBadDigestThe Content-MD5 you specified did not match what we received.\r\n```", "pr_html_url": "https://github.com/localstack/localstack/pull/5001", "file_loc": "{'base_commit': '2fe8440b619329891db150e45910e8aaad97b7ce', 'files': [{'path': 'localstack/services/s3/s3_listener.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 883], 'mod': [61, 62]}, \"(None, 'check_content_md5', 884)\": {'add': [884]}}}, {'path': 'tests/integration/test_s3.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 2, 51]}, \"(None, 'test_cors_with_allowed_origins', 2662)\": {'add': [2779]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}} {"organization": "localstack", "repo_name": "localstack", "base_commit": "8c9d9b0475247f667a0f184f2fbc6d66b955749f", "iss_has_pr": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/11696", "iss_label": "type: bug\nstatus: resolved/fixed\naws:apigateway", "title": "bug: API Gateway does not persist correctly when you restart the localstack docker container", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nI have a working api gateway created with localstack. 
When I restart the container and try to query the same url, I get this message:\r\n`{\"message\": \"The API id '0e0cf92f' does not correspond to a deployed API Gateway API\"}`.\r\n\r\n# Details:\r\nFirst I create my API and confirm it works:\r\n```\r\n$ awslocal apigatewayv2 get-apis\r\n{\r\n \"Items\": [\r\n {\r\n \"ApiEndpoint\": \"http://0e0cf92f.execute-api.localhost.localstack.cloud:4566\",\r\n \"ApiId\": \"0e0cf92f\",\r\n \"ApiKeySelectionExpression\": \"$request.header.x-api-key\",\r\n \"CorsConfiguration\": {\r\n \"AllowHeaders\": [\r\n \"*\"\r\n ],\r\n \"AllowMethods\": [\r\n \"*\"\r\n ],\r\n \"AllowOrigins\": [\r\n \"*\"\r\n ],\r\n \"ExposeHeaders\": [\r\n \"*\"\r\n ]\r\n },\r\n \"CreatedDate\": \"2024-10-16T05:24:49.452000+00:00\",\r\n \"DisableExecuteApiEndpoint\": false,\r\n \"Name\": \"XpedigoAPI_v2\",\r\n \"ProtocolType\": \"HTTP\",\r\n \"RouteSelectionExpression\": \"$request.method $request.path\",\r\n \"Tags\": {},\r\n \"Version\": \"2024-09-25 01:18:37UTC\"\r\n }\r\n ]\r\n}\r\n```\r\n```\r\n$ awslocal apigatewayv2 get-stages --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"CreatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"DefaultRouteSettings\": {\r\n \"DetailedMetricsEnabled\": false\r\n },\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"LastUpdatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"RouteSettings\": {},\r\n \"StageName\": \"local\",\r\n \"StageVariables\": {\r\n \"baseurl\": \"alb-localstack-bdowson.ngrok.io\",\r\n \"env\": \"local\"\r\n },\r\n \"Tags\": {}\r\n }\r\n ]\r\n}\r\n```\r\n```\r\n$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"AutoDeployed\": false,\r\n \"CreatedDate\": \"2024-10-16T05:24:49.529068+00:00\",\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"DeploymentStatus\": \"DEPLOYED\"\r\n }\r\n ]\r\n}\r\n```\r\nConfirm it works:\r\n```\r\n$ curl -v https://0e0cf92f.execute-api.localhost.localstack.cloud:4566/local/accounts/health\r\n* Trying 127.0.0.1:4566...\r\n* TCP_NODELAY set\r\n* Connected to 0e0cf92f.execute-api.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)\r\n* ALPN, offering h2\r\n* ALPN, offering http/1.1\r\n* successfully set certificate verify locations:\r\n* CAfile: /etc/ssl/certs/ca-certificates.crt\r\n CApath: /etc/ssl/certs\r\n* TLSv1.3 (OUT), TLS handshake, Client hello (1):\r\n* TLSv1.3 (IN), TLS handshake, Server hello (2):\r\n* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):\r\n* TLSv1.3 (IN), TLS handshake, Certificate (11):\r\n* TLSv1.3 (IN), TLS handshake, CERT verify (15):\r\n* TLSv1.3 (IN), TLS handshake, Finished (20):\r\n* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):\r\n* TLSv1.3 (OUT), TLS handshake, Finished (20):\r\n* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384\r\n* ALPN, server accepted to use h2\r\n* Server certificate:\r\n* subject: CN=localhost.localstack.cloud\r\n* start date: Sep 6 00:00:00 2024 GMT\r\n* expire date: Dec 5 23:59:59 2024 GMT\r\n* subjectAltName: host \"0e0cf92f.execute-api.localhost.localstack.cloud\" matched cert's \"*.execute-api.localhost.localstack.cloud\"\r\n* issuer: C=AT; O=ZeroSSL; CN=ZeroSSL RSA Domain Secure Site CA\r\n* SSL certificate verify ok.\r\n* Using HTTP2, server supports multi-use\r\n* Connection state changed (HTTP/2 confirmed)\r\n* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0\r\n* Using Stream ID: 1 (easy handle 0x5b8d78082650)\r\n> GET /local/accounts/health HTTP/2\r\n> Host: 0e0cf92f.execute-api.localhost.localstack.cloud:4566\r\n> user-agent: 
curl/7.68.0\r\n> accept: */*\r\n> \r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* old SSL session ID is stale, removing\r\n* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!\r\n< HTTP/2 200 \r\n< server: TwistedWeb/24.3.0\r\n< date: Wed, 16 Oct 2024 05:25:16 GMT\r\n< content-type: text/html; charset=UTF-8\r\n< cache-control: private, must-revalidate\r\n< expires: -1\r\n< pragma: no-cache\r\n< x-powered-by: PHP/8.1.9RC1\r\n< content-length: 2\r\n< apigw-requestid: 5f9a3aa7\r\n< \r\n* Connection #0 to host 0e0cf92f.execute-api.localhost.localstack.cloud left intact\r\nOK\r\n```\r\n\r\nNow I stop localstack, and restart it with `docker-compose up`. The api gateway no longer works correctly:\r\n```\r\n$ curl -v https://0e0cf92f.execute-api.localhost.localstack.cloud:4566/local/accounts/health\r\n* Trying 127.0.0.1:4566...\r\n* TCP_NODELAY set\r\n* Connected to 0e0cf92f.execute-api.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)\r\n* ALPN, offering h2\r\n* ALPN, offering http/1.1\r\n* successfully set certificate verify locations:\r\n* CAfile: /etc/ssl/certs/ca-certificates.crt\r\n CApath: /etc/ssl/certs\r\n* TLSv1.3 (OUT), TLS handshake, Client hello (1):\r\n* TLSv1.3 (IN), TLS handshake, Server hello (2):\r\n* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):\r\n* TLSv1.3 (IN), TLS handshake, Certificate (11):\r\n* TLSv1.3 (IN), TLS handshake, CERT verify (15):\r\n* TLSv1.3 (IN), TLS handshake, Finished (20):\r\n* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):\r\n* TLSv1.3 (OUT), TLS handshake, Finished (20):\r\n* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384\r\n* ALPN, server accepted to use h2\r\n* Server certificate:\r\n* subject: CN=localhost.localstack.cloud\r\n* start date: Sep 6 00:00:00 2024 GMT\r\n* expire date: Dec 5 23:59:59 2024 GMT\r\n* subjectAltName: host \"0e0cf92f.execute-api.localhost.localstack.cloud\" matched cert's \"*.execute-api.localhost.localstack.cloud\"\r\n* issuer: C=AT; O=ZeroSSL; CN=ZeroSSL RSA Domain Secure Site CA\r\n* SSL certificate verify ok.\r\n* Using HTTP2, server supports multi-use\r\n* Connection state changed (HTTP/2 confirmed)\r\n* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0\r\n* Using Stream ID: 1 (easy handle 0x6550ac6c5650)\r\n> GET /local/accounts/health HTTP/2\r\n> Host: 0e0cf92f.execute-api.localhost.localstack.cloud:4566\r\n> user-agent: curl/7.68.0\r\n> accept: */*\r\n> \r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):\r\n* old SSL session ID is stale, removing\r\n* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!\r\n< HTTP/2 404 \r\n< server: TwistedWeb/24.3.0\r\n< date: Wed, 16 Oct 2024 05:29:09 GMT\r\n< content-type: application/json\r\n< content-length: 86\r\n< \r\n* Connection #0 to host 0e0cf92f.execute-api.localhost.localstack.cloud left intact\r\n{\"message\": \"The API id '0e0cf92f' does not correspond to a deployed API Gateway API\"}\r\n```\r\n\r\nBut the configurations are all the same as before:\r\n```\r\n$ awslocal apigatewayv2 get-apis\r\n{\r\n \"Items\": [\r\n {\r\n \"ApiEndpoint\": \"http://0e0cf92f.execute-api.localhost.localstack.cloud:4566\",\r\n \"ApiId\": \"0e0cf92f\",\r\n \"ApiKeySelectionExpression\": \"$request.header.x-api-key\",\r\n \"CorsConfiguration\": {\r\n \"AllowHeaders\": [\r\n \"*\"\r\n ],\r\n \"AllowMethods\": [\r\n \"*\"\r\n ],\r\n \"AllowOrigins\": [\r\n \"*\"\r\n ],\r\n \"ExposeHeaders\": 
[\r\n \"*\"\r\n ]\r\n },\r\n \"CreatedDate\": \"2024-10-16T05:24:49.452000+00:00\",\r\n \"DisableExecuteApiEndpoint\": false,\r\n \"Name\": \"XpedigoAPI_v2\",\r\n \"ProtocolType\": \"HTTP\",\r\n \"RouteSelectionExpression\": \"$request.method $request.path\",\r\n \"Tags\": {},\r\n \"Version\": \"2024-09-25 01:18:37UTC\"\r\n }\r\n ]\r\n}\r\n\r\n$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"AutoDeployed\": false,\r\n \"CreatedDate\": \"2024-10-16T05:24:49.529068+00:00\",\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"DeploymentStatus\": \"DEPLOYED\"\r\n }\r\n ]\r\n}\r\n\r\n$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f\r\n{\r\n \"Items\": [\r\n {\r\n \"CreatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"DefaultRouteSettings\": {\r\n \"DetailedMetricsEnabled\": false\r\n },\r\n \"DeploymentId\": \"4d3d207f\",\r\n \"LastUpdatedDate\": \"2024-10-16T05:24:49.524619+00:00\",\r\n \"RouteSettings\": {},\r\n \"StageName\": \"local\",\r\n \"StageVariables\": {\r\n \"baseurl\": \"alb-localstack-bdowson.ngrok.io\",\r\n \"env\": \"local\"\r\n },\r\n \"Tags\": {}\r\n }\r\n ]\r\n}\r\n```\r\n\r\n\r\n### Expected Behavior\r\n\r\nAPI gateway should work correctly even after a localstack container restart.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\ndocker-compose.yml:\r\n```\r\nlocalstack:\r\n container_name: localstack\r\n image: localstack/localstack-pro:latest\r\n ports:\r\n - 4566:4566\r\n - 4510-4559:4510-4559\r\n environment:\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - DEBUG=1\r\n - PERSISTENCE=1\r\n - SNAPSHOT_LOAD_STRATEGY=ON_STARTUP\r\n - LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY}\r\n - PROVIDER_OVERRIDE_APIGATEWAY=next_gen\r\n networks:\r\n app_network:\r\n ipv4_address: 10.0.2.20\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n - \"/localstack-data:/var/lib/localstack\"\r\n```\r\n\r\n1. `docker-compose up localstack`\r\n2. Import API Gateway with `awslocal apigatewayv2 import-api --body file://t.json`\r\n3. Create stage with `awslocal apigatewayv2 create-stage --api-id 54ae753d --stage-name local --auto-deploy`\r\n4. Confirm it works with `curl -v https://[gateway url]/local/whatever`\r\n5. Stop localstack\r\n6. Run `docker-compose up localstack` again\r\n7. Try and curl the api again and you will get an error\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: Ubuntu 20.04.5 LTS\r\n- LocalStack: \r\n LocalStack version: 3.8.2.dev33\r\n LocalStack Docker image sha: localstack/localstack-pro@sha256:b533e1bcfbe8f5462483725276a0e7f8fbd9ded32b1be2dac5ec9cee5e822023\r\n LocalStack build date: 2024-10-15\r\n LocalStack build git hash: 318e1adc\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nAfter this error appears, even if I delete the API and recreate it I still get the message `{\"message\": \"The API id 'xxxx' does not correspond to a deployed API Gateway API\"}`. 
The only way for me to resolve it is to delete my local localstack snapshot folder and rebuild everything.", "pr_html_url": "https://github.com/localstack/localstack/pull/11702", "file_loc": "{'base_commit': '8c9d9b0475247f667a0f184f2fbc6d66b955749f', 'files': [{'path': 'localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, \"('ApiGatewayEndpoint', None, 34)\": {'mod': [41]}, \"('ApiGatewayEndpoint', '__init__', 41)\": {'mod': [44, 45, 46]}}}, {'path': 'localstack-core/localstack/services/apigateway/next_gen/provider.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [21]}, \"('ApigatewayNextGenProvider', '__init__', 46)\": {'mod': [50, 51]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py", "localstack-core/localstack/services/apigateway/next_gen/provider.py"], "doc": [], "test": [], "config": [], "asset": []}}
 {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "d865e5213515cef6344f16f4c77386be9ce8f223", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/23814", "iss_label": "Performance\nCategorical\ngood first issue", "title": "equality comparison with a scalar is slow for category (performance regression)", "body": "Are the following 2 ways to compare a series to a scalar equivalent (ignore missing values)? I have to write the hard way in order to take advantage of the category properties.\r\n\r\n ```python\r\n x = pd.Series(list('abcd') * 1000000).astype('category')\r\n %timeit x == 'a'\r\n # 10 loops, best of 3: 25.2 ms per loop\r\n %timeit x.cat.codes == x.cat.categories.get_loc('a')\r\n # 1000 loops, best of 3: 750 \u00b5s per loop\r\n ```", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/23888", "file_loc": "{'base_commit': 'd865e5213515cef6344f16f4c77386be9ce8f223', 'files': [{'path': 'asv_bench/benchmarks/categoricals.py', 'status': 'modified', 'Loc': {\"('Constructor', 'setup', 33)\": {'add': [48]}, '(None, None, None)': {'add': [70]}}}, {'path': 'doc/source/whatsnew/v0.24.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1153]}}}, {'path': 'pandas/core/arrays/categorical.py', 'status': 'modified', 'Loc': {\"('Categorical', '__init__', 314)\": {'add': [349]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/arrays/categorical.py", "asv_bench/benchmarks/categoricals.py"], "doc": ["doc/source/whatsnew/v0.24.0.rst"], "test": [], "config": [], "asset": []}}
@@ -84,14 +84,14 @@
 {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "fe7043a648eac1e0ec0af772a21b283566ecd020", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3903", "iss_label": "enhancement", "title": "Can I get remote server's ip address via response?", "body": "Can I get remote server's ip address via response?\r\nFor some reason. I'll need to get remote site's ip address when parsing response. 
I looked the document but found nothing.\r\nAny one know that?\r\nThanks!", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3940", "file_loc": "{'base_commit': 'fe7043a648eac1e0ec0af772a21b283566ecd020', 'files': [{'path': 'conftest.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'docs/topics/request-response.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [618, 707], 'mod': [39]}}}, {'path': 'scrapy/core/downloader/__init__.py', 'status': 'modified', 'Loc': {\"('Downloader', '_download', 160)\": {'mod': [176]}}}, {'path': 'scrapy/core/downloader/handlers/http11.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, \"('_ResponseReader', '__init__', 440)\": {'add': [451]}, \"('_ResponseReader', None, 438)\": {'add': [457]}, \"('ScrapyAgent', '_cb_bodyready', 373)\": {'mod': [376]}, \"('ScrapyAgent', '_cb_bodydone', 411)\": {'mod': [412, 413, 414, 415, 416, 417]}, \"('_ResponseReader', 'connectionLost', 483)\": {'mod': [489, 493, 498]}}}, {'path': 'scrapy/http/response/__init__.py', 'status': 'modified', 'Loc': {\"('Response', '__init__', 20)\": {'add': [27]}, \"('Response', None, 18)\": {'mod': [20]}, \"('Response', 'replace', 86)\": {'mod': [90]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 20, 226], 'mod': [9, 10, 13, 14, 16, 17, 241, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 253]}, \"('MockServer', None, 201)\": {'mod': [201]}, \"('MockServer', '__enter__', 203)\": {'mod': [204, 206]}}}, {'path': 'tests/test_crawl.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, \"('CrawlTestCase', 'test_response_ssl_certificate_empty_response', 431)\": {'add': [438]}}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {\"('CrawlerProcessSubprocess', None, 277)\": {'add': [287], 'mod': [277, 278]}, \"('CrawlerProcessSubprocess', 'test_reactor_asyncio', 331)\": {'add': [334]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": ["scrapy/http/response/__init__.py", "scrapy/core/downloader/handlers/http11.py", "scrapy/core/downloader/__init__.py", "tests/mockserver.py", "conftest.py"], "doc": ["docs/topics/request-response.rst"], "test": ["tests/test_crawler.py", "tests/test_crawl.py"], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "7eaa5ee37f2ef0fb37dc6e9efbead726665810b4", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/3659", "iss_label": "", "title": "URL proxy auth with empty passwords doesn't emit auth header.", "body": "I'm using a proxy that requires authentication to send request that receives 302 response with Location header. I would like python.requests to follow this redirect and make request via proxy with specified credentials. But it seems like this doesn't happen, if I provide credentials in HTTPProxyAuth they will work ok for 200 responses but will fail for 302. 
See below code sample:\r\n\r\n```python\r\n\r\nimport requests\r\nfrom requests.auth import HTTPProxyAuth\r\n\r\nsess = requests.Session()\r\nurl1 = 'http://httpbin.org/'\r\nurl2 = 'http://httpbin.org/redirect/2'\r\nauth = HTTPProxyAuth('frank', 'hunter2')\r\nproxies = {\r\n \"http\": \"http://localhost:9000\"\r\n}\r\nresponse1 = sess.get(url1, proxies=proxies, auth=auth)\r\nresponse1.raise_for_status()\r\nresponse2 = sess.get(url2, proxies=proxies, auth=auth)\r\nresponse2.raise_for_status()\r\n```\r\nNow launch MITM proxy on localhost\r\n\r\n```\r\n> mitmproxy -p 9000 --singleuser=frank:hunter2\r\n```\r\n\r\nThis fails with 407 for me, and proxy logs only two requests\r\n\r\n```\r\n response2.raise_for_status()\r\n File \"----------\", line 862, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 407 Client Error: Proxy Authentication Required for url: http://httpbin.org/relative-redirect/1\r\n```\r\n\r\n```\r\n>> GET http://httpbin.org/\r\n \u2190 200 text/html 11.87kB 3.57MB/s\r\n GET http://httpbin.org/redirect/2\r\n \u2190 302 text/html 247B 76.59kB/s\r\n\r\n```\r\nit does not log request to `Location`. \r\n\r\nI see that putting credentials in proxies dictionary somehow fixes this issue when I use MITM proxy but it doesn't fix it for my production proxy (can't share code or proxy details here, need to check closer why it doesn't work for my proxy). I guess some details in setup of proxies might vary.\r\n\r\nIs this a bug? I see some issues for proxy auth but they are mostly about HTTPS, not sure if someone reported this thing I describe here. Should this be fixed?\r\n\r\nEDIT:\r\n\r\nIt looks like this always fails if proxy password is empty string.\r\n\r\nchange auth to \r\n\r\n```python\r\nauth = HTTPProxyAuth('frank', '')\r\n\r\nproxies = {\r\n \"http\": \"http://frank:@localhost:9000\"\r\n}\r\n```\r\n\r\nwill now always fail on redirect.\r\n\r\n```python\r\nauth = HTTPProxyAuth('frank', 'hunter2')\r\nproxies = {\r\n \"http\": \"http://frank:hunter2@localhost:9000\"\r\n}\r\n```\r\nworks fine on redirects, but seems somewhat duplicated.\r\n\r\nI noticed this on Ubuntu 14.04, requests 2.11.1, python 2.7.6, mitmproxy 0.10.1", "pr_html_url": "https://github.com/psf/requests/pull/3660", "file_loc": "{'base_commit': '7eaa5ee37f2ef0fb37dc6e9efbead726665810b4', 'files': [{'path': 'requests/adapters.py', 'status': 'modified', 'Loc': {\"('HTTPAdapter', 'proxy_headers', 353)\": {'mod': [369]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {\"('TestRequests', None, 55)\": {'add': [1474]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/adapters.py"], "doc": [], "test": ["tests/test_requests.py"], "config": [], "asset": []}} {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "923ac2bdee409e4fa8c47414b07f52e036bb21bc", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/25828", "iss_label": "Docs\ngood first issue", "title": "Use Substitution Decorator for CustomBusinessMonthEnd", "body": "This is a follow up to https://github.com/pandas-dev/pandas/pull/21093/files#r188805397 which wasn't working with Py27. 
Now that that is a thing of the past we should be able to use the more idiomatic Substitution approach to generating this docstring\r\n\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/25868", "file_loc": "{'base_commit': '923ac2bdee409e4fa8c47414b07f52e036bb21bc', 'files': [{'path': 'pandas/tseries/offsets.py', 'status': 'modified', 'Loc': {\"('_CustomBusinessMonth', None, 972)\": {'add': [979, 987, 988], 'mod': [974, 975, 981, 983, 985, 986]}, '(None, None, None)': {'add': [1054, 1061], 'mod': [18]}, \"('CustomBusinessMonthEnd', None, 1055)\": {'mod': [1056, 1057, 1058]}, \"('CustomBusinessMonthBegin', None, 1062)\": {'mod': [1063, 1064, 1065, 1066]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/tseries/offsets.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "ansible", "repo_name": "ansible", "base_commit": "59a240cd311f5cedbcd5e12421f1d3bd596d9070", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/71254", "iss_label": "easyfix\nsupport:core\ndocs\naffects_2.11", "title": "Files contain broken references 404", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\nFiles contain broken references (return 404):\r\n\r\n- [ ] docs/docsite/rst/user_guide/collections_using.rst https://docs.ansible.com/collections/\r\n\r\n- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py~\r\n\r\n- [x] docs/docsite/rst/dev_guide/testing_units.rst https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py\r\n\r\n- [x] docs/docsite/rst/porting_guides/porting_guide_base_2.11.rst\r\nhttps://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst\r\n\r\n- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst https://github.com/vmware/pyvmomi/tree/master/docs\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.in\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.ini\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst 
https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/cobbler.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.yaml\r\n\r\n- [ ] docs/docsite/rst/scenario_guides/guide_packet.rst https://support.packet.com/kb/articles/user-data\r\n\r\n\r\n##### ISSUE TYPE\r\n- Documentation Report\r\n\r\n##### ANSIBLE VERSION\r\n```\r\ndevel\r\n```", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/71705", "commit_html_url": null, "file_loc": "{'base_commit': '59a240cd311f5cedbcd5e12421f1d3bd596d9070', 'files': [{'path': 'docs/docsite/rst/scenario_guides/guide_packet.rst', 'status': 'modified', 'Loc': {'(None, None, 126)': {'mod': [126]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/docsite/rst/scenario_guides/guide_packet.rst"], "test": [], "config": [], "asset": []}}
+{"organization": "ansible", "repo_name": "ansible", "base_commit": "59a240cd311f5cedbcd5e12421f1d3bd596d9070", "iss_html_url": "https://github.com/ansible/ansible/issues/71254", "iss_label": "easyfix\nsupport:core\ndocs\naffects_2.11", "title": "Files contain broken references 404", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\nFiles contain broken references (return 404):\r\n\r\n- [ ] docs/docsite/rst/user_guide/collections_using.rst https://docs.ansible.com/collections/\r\n\r\n- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py~\r\n\r\n- [x] docs/docsite/rst/dev_guide/testing_units.rst https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py\r\n\r\n- [x] docs/docsite/rst/porting_guides/porting_guide_base_2.11.rst\r\nhttps://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst\r\n\r\n- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst https://github.com/vmware/pyvmomi/tree/master/docs\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.in\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.ini\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst 
https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py\r\n\r\n- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/cobbler.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.py\r\n\r\n- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.yaml\r\n\r\n- [ ] docs/docsite/rst/scenario_guides/guide_packet.rst https://support.packet.com/kb/articles/user-data\r\n\r\n\r\n##### ISSUE TYPE\r\n- Documentation Report\r\n\r\n##### ANSIBLE VERSION\r\n```\r\ndevel\r\n```", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/71705", "commit_html_url": null, "file_loc": "{'base_commit': '59a240cd311f5cedbcd5e12421f1d3bd596d9070', 'files': [{'path': 'docs/docsite/rst/scenario_guides/guide_packet.rst', 'status': 'modified', 'Loc': {'(None, None, 126)': {'mod': [126]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["docs/docsite/rst/scenario_guides/guide_packet.rst"], "test": [], "config": [], "asset": []}} {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "f8464b4f66e627ed2778c9a27dbe4a8642482baf", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2226", "iss_label": "bug", "title": "Yolov5 crashes with RTSP stream analysis", "body": "## \ud83d\udc1b Bug\r\n\r\nIf I want to analyze an rtsp stream with Yolov5 in a docker container, regardless the latest or the v4.0 version, it crashes.\r\n\r\n## To Reproduce (REQUIRED)\r\n\r\nInput:\r\n```\r\ndocker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server\r\n\r\nffmpeg -i video.mp4 -s 640x480 -c:v libx264 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/analysis\r\n\r\ndocker run -it ultralytics/yolov5:latest\r\n\r\npython3 detect.py --source rtsp://host.docker.internal:8554/analysis --weights yolov5s.pt --conf 0.25 --save-txt\r\n```\r\n\r\nOutput:\r\n```\r\nNamespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=True, source='rtsp://host.docker.internal:8554/analysis', update=False, view_img=False, weights=['yolov5s.pt'])\r\n/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. 
Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:100.)\r\n return torch._C._cuda_getDeviceCount() > 0\r\nYOLOv5 v4.0-80-gf8464b4 torch 1.8.0a0+1606899 CPU\r\n\r\nFusing layers...\r\nModel Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS\r\n[h264 @ 0x55e674656100] co located POCs unavailable\r\n[h264 @ 0x55e674656100] mmco: unref short failure\r\n[h264 @ 0x55e675117cc0] co located POCs unavailable\r\n[h264 @ 0x55e674dbb300] mmco: unref short failure\r\n[h264 @ 0x55e674ec09c0] co located POCs unavailable\r\n1/1: rtsp://host.docker.internal:8554/analysis... success (640x480 at 30.00 FPS).\r\n\r\n0: 480x640 13 persons, 1 tennis racket, Done. (2.089s)\r\nqt.qpa.xcb: could not connect to display\r\nqt.qpa.plugin: Could not load the Qt platform plugin \"xcb\" in \"/opt/conda/lib/python3.8/site-packages/cv2/qt/plugins\" even though it was found.\r\nThis application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.\r\n\r\nAvailable platform plugins are: xcb.\r\n\r\nAborted\r\n```\r\n\r\n\r\n## Expected behavior\r\n\r\nDoing the analysis\r\n\r\n## Environment\r\n\r\n - OS: Yolov5 docker container on macos Catalina\r\n - GPU none\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2231", "file_loc": "{'base_commit': 'f8464b4f66e627ed2778c9a27dbe4a8642482baf', 'files': [{'path': 'detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12, 13]}, \"(None, 'detect', 18)\": {'mod': [48, 121]}}}, {'path': 'utils/general.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [97]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/general.py", "detect.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "8fcdf3b60b2930a4273cab4e3df22b77680ff41d", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6515", "iss_label": "bug", "title": "GPU Memory Leak on Loading Pre-Trained Checkpoint", "body": "### Search before asking\r\n\r\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.\r\n\r\n\r\n### YOLOv5 Component\r\n\r\nTraining\r\n\r\n### Bug\r\n\r\nTraining YOLO from a checkpoint (*.pt) consumes more GPU memory than training from a pre-trained weight (i.e. yolov5l).\r\n\r\n### Environment\r\n\r\n- YOLO: YOLOv5 (latest; how to check the yolo version?)\r\n- CUDA: 11.6 (Tesla T4, 15360MiB)\r\n- OS: Ubuntu 18.04.6 LTS (Bionic Beaver)\r\n- Python: 3.8.12\r\n\r\n### Minimal Reproducible Example\r\n\r\nIn the below training command, case 2 requires more GPU memory than case 1.\r\n```\r\n# 1. train from pre-trained model\r\ntrain.py ... --weights yolov5l\r\n\r\n# 2. train from pre-trained checkpoint\r\ntrain.py ... --weights pre_trained_checkpoint.pt\r\n```\r\n\r\n### Additional\r\n\r\nAs reported on the pytorch forum[1], loading state dict on CUDA device causes memory leak. 
We should load it on CPU memory:\r\n\r\n```python\r\nstate_dict = torch.load(directory, map_location=lambda storage, loc: storage)\r\n```\r\n\r\n- [1] https://discuss.pytorch.org/t/load-state-dict-causes-memory-leak/36189/5?u=bilzrd\r\n\r\n### Are you willing to submit a PR?\r\n\r\n- [X] Yes I'd like to help by submitting a PR!", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/6516", "file_loc": "{'base_commit': '8fcdf3b60b2930a4273cab4e3df22b77680ff41d', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {\"(None, 'train', 65)\": {'mod': [123]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "2a9297f2444f912c354168c6c0df1c782edace0e", "iss_has_pr": 1, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1189", "iss_label": "bug", "title": "Sites Giving 404 error or no profile", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n- [x] I'm reporting a bug in Sherlock's functionality\r\n- [ ] The bug I'm reporting is not a false positive or a false negative\r\n- [ ] I've verified that I'm running the latest version of Sherlock\r\n- [ ] I've checked for similar bug reports including closed ones\r\n- [ ] I've checked for pull requests that attempt to fix this bug\r\n\r\n## Description\r\n\r\n\r\nThere are some sites which comes in result of matched Usernames but tends to give No Profile Page or a 404 Error,\r\nthose sites are below..\r\n\r\n[+] Anilist: https://anilist.co/user/\r\n[+] Coil: https://coil.com/u/\r\n[+] RuneScape: https://apps.runescape.com/runemetrics/app/overview/player/\r\n[+] TrackmaniaLadder: http://en.tm-ladder.com/_rech.php\r\n[+] babyblogRU: https://www.babyblog.ru/user/info", "pr_html_url": "https://github.com/sherlock-project/sherlock/pull/1192", "file_loc": "{'base_commit': '2a9297f2444f912c354168c6c0df1c782edace0e', 'files': [{'path': 'removed_sites.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1255]}}}, {'path': 'sherlock/resources/data.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [68, 69, 70, 71, 72, 73, 74, 75, 387, 388, 389, 390, 391, 392, 393, 394]}}}, {'path': 'sites.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [106], 'mod': [1, 11, 52]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sherlock/resources/data.json"], "doc": ["removed_sites.md", "sites.md"], "test": [], "config": [], "asset": []}} {"organization": "home-assistant", "repo_name": "core", "base_commit": "9e41a37284b8796bf3a190fe4bd2a4aee8616ec2", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/55095", "iss_label": "integration: honeywell", "title": "Rate limiting in Honeywell TCC", "body": "### The problem\r\n\r\nMultiple Honeywell TCC users are reporting rate limit errors in #53981. 
Restarting HomeAssistant seems to temporarily clear it up\r\n\r\n### What is version of Home Assistant Core has the issue?\r\n\r\n2021.8.8\r\n\r\n### What was the last working version of Home Assistant Core?\r\n\r\n_No response_\r\n\r\n### What type of installation are you running?\r\n\r\nHome Assistant Container\r\n\r\n### Integration causing the issue\r\n\r\nHoneywell Total Connect Comfort (US)\r\n\r\n### Link to integration documentation on our website\r\n\r\nhttps://www.home-assistant.io/integrations/honeywell\r\n\r\n### Example YAML snippet\r\n\r\n_No response_\r\n\r\n### Anything in the logs that might be useful for us?\r\n\r\n```txt\r\n2021-08-23 11:08:44 ERROR (MainThread) [homeassistant.helpers.entity] Update for climate.downstairs fails\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/components/honeywell/__init__.py\", line 113, in update\r\n await self._hass.async_add_executor_job(device.refresh)\r\n File \"/usr/local/lib/python3.9/concurrent/futures/thread.py\", line 52, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 87, in refresh\r\n data = self._client._get_thermostat_data(self.deviceid)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 468, in _get_thermostat_data\r\n return self._get_json(url)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 444, in _get_json\r\n return self._request_json('get', *args, **kwargs)\r\n File \"/usr/local/lib/python3.9/site-packages/somecomfort/client.py\", line 436, in _request_json\r\n raise APIRateLimited()\r\nsomecomfort.client.APIRateLimited: You are being rate-limited. Try waiting a bit.\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 446, in async_update_ha_state\r\n await self.async_device_update()\r\n File \"/usr/src/homeassistant/homeassistant/helpers/entity.py\", line 654, in async_device_update\r\n raise exc\r\n File \"/usr/src/homeassistant/homeassistant/components/honeywell/climate.py\", line 385, in async_update\r\n await self._data.update()\r\n File \"/usr/src/homeassistant/homeassistant/components/honeywell/__init__.py\", line 124, in update\r\n result = await self._hass.async_add_executor_job(self._retry())\r\n File \"/usr/local/lib/python3.9/concurrent/futures/thread.py\", line 52, in run\r\n result = self.fn(*self.args, **self.kwargs)\r\nTypeError: 'coroutine' object is not callable\r\n```\r\n```\r\n\r\n\r\n### Additional information\r\n\r\n_No response_", "pr_html_url": "https://github.com/home-assistant/core/pull/55304", "file_loc": "{'base_commit': '9e41a37284b8796bf3a190fe4bd2a4aee8616ec2', 'files': [{'path': 'homeassistant/components/honeywell/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [12]}, \"(None, 'async_setup_entry', 16)\": {'mod': [45]}, \"('HoneywellData', None, 68)\": {'mod': [105, 111]}, \"('HoneywellData', '_refresh_devices', 105)\": {'mod': [108]}, \"('HoneywellData', 'update', 111)\": {'mod': [116, 127]}}}, {'path': 'homeassistant/components/honeywell/climate.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [109]}, \"('HoneywellUSThermostat', 'async_update', 385)\": {'mod': [387]}}}, {'path': 'tests/components/honeywell/test_init.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 8, 17]}}}]}", "own_code_loc": 
[], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["homeassistant/components/honeywell/__init__.py", "homeassistant/components/honeywell/climate.py"], "doc": [], "test": ["tests/components/honeywell/test_init.py"], "config": [], "asset": []}} {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "f542c58a48e87878028b7639a3c0296bdb351071", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/3", "iss_label": "dev\nadvuser", "title": "Improve command line usage", "body": "Adding a command line args parsing with an help would be great !\r\n\r\nPreferably with `argparse`", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/13", "file_loc": "{'base_commit': 'f542c58a48e87878028b7639a3c0296bdb351071', 'files': [{'path': 'extract.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [1, 4, 6, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 63, 64, 65, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 78]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 4]}, \"('FullPaths', None, 10)\": {'mod': [10, 11, 12, 13]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["extract.py", "lib/utils.py", "lib/faces_detect.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "xai-org", "repo_name": "grok-1", "base_commit": "e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8", "iss_has_pr": 1, "iss_html_url": "https://github.com/xai-org/grok-1/issues/14", "iss_label": "", "title": "Grok implementation details", "body": "not an issue but would be nice if it was in the readme/model.py header:\r\n314B parameters\r\nMixture of 8 Experts\r\n2 experts used per token\r\n64 layers\r\n48 attention heads for queries\r\n8 attention heads for keys/values\r\nembeddings size: 6,144\r\nrotary embeddings (RoPE)\r\nSentencePiece tokenizer; 131,072 tokens\r\nSupports activation sharding and 8-bit quantization\r\nMax seq length (context): 8,192 tokens", "pr_html_url": "https://github.com/xai-org/grok-1/pull/27", "file_loc": "{'base_commit': 'e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "a63524684d02131aef4f2e9d2cea7bfe210abc96", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/84408", "iss_label": "module: onnx\ntriaged\ntopic: bug fixes", "title": "Exporting the operator ::col2im to ONNX opset version 11 is not supported", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWhen I converted the model in \u201c.pt\u201d format to onnx format, I received an error that the operator col2im is not supported.\r\n\r\n## code\r\n\r\n 
import torch\r\n from cvnets import get_model\r\n from options.opts import get_segmentation_eval_arguments\r\n \r\n def pt2onnx():\r\n opts = get_segmentation_eval_arguments()\r\n model = get_model(opts)\r\n model.eval()\r\n onnx_save_path = \"model/mobilevit.onnx\"\r\n in_data = torch.randn(1, 3, 512, 512)\r\n torch.onnx.export(model, \r\n in_data, \r\n onnx_save_path, \r\n opset_version=11, \r\n do_constant_folding=True, \r\n input_names=[\"in\"],\r\n output_names=[\"out\"])\r\n return\r\n\r\n if __name__ == '__main__':\r\n pt2onnx()\r\n\r\n## error\r\nTraceback (most recent call last):\r\n File \"/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py\", line 20, in \r\n pt2onnx()\r\n File \"/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py\", line 13, in pt2onnx\r\n torch.onnx.export(model, in_data, onnx_save_path, opset_version=11, do_constant_folding=True, input_names=[\"in\"],\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 350, in export\r\n return utils.export(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 163, in export\r\n _export(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1074, in _export\r\n graph, params_dict, torch_out = _model_to_graph(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 731, in _model_to_graph\r\n graph = _optimize_graph(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 308, in _optimize_graph\r\n graph = _C._jit_pass_onnx(graph, operator_export_type)\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 416, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1421, in _run_symbolic_function\r\n raise symbolic_registry.UnsupportedOperatorError(\r\n**torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::col2im to ONNX opset version 11 is not supported. 
Please feel free to request support or submit a pull request on PyTorch GitHub.**\r\n\r\n## ENV\r\nPyTorch version: 1.12.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.1 LTS (x86_64)\r\nGCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-47-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050\r\nNvidia driver version: 510.85.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.1\r\n[pip3] pytorchvideo==0.1.5\r\n[pip3] torch==1.12.1\r\n[pip3] torchaudio==0.12.1\r\n[pip3] torchvision==0.13.1\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 10.2.89 hfd86e86_1 \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2021.4.0 h06a4308_640 \r\n[conda] mkl-service 2.4.0 py310h7f8727e_0 \r\n[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 \r\n[conda] mkl_random 1.2.2 py310h00e6091_0 \r\n[conda] numpy 1.23.1 py310h1794996_0 \r\n[conda] numpy-base 1.23.1 py310hcba007f_0 \r\n[conda] pytorch 1.12.1 py3.10_cuda10.2_cudnn7.6.5_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] pytorchvideo 0.1.5 pypi_0 pypi\r\n[conda] torchaudio 0.12.1 py310_cu102 pytorch\r\n[conda] torchvision 0.13.1 py310_cu102 pytorch\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/a63524684d02131aef4f2e9d2cea7bfe210abc96", "file_loc": "{'base_commit': 'a63524684d02131aef4f2e9d2cea7bfe210abc96', 'files': [{'path': 'test/onnx/test_pytorch_onnx_no_runtime.py', 'status': 'modified', 'Loc': {\"('TestONNXExport', None, 79)\": {'add': [1158]}}}, {'path': 'test/onnx/test_pytorch_onnx_onnxruntime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [47]}}}, {'path': 'torch/csrc/jit/serialization/export.cpp', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [84], 'mod': [62]}}}, {'path': 'torch/onnx/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27, 64]}}}, {'path': 'torch/onnx/_constants.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["torch/onnx/_constants.py", "torch/onnx/__init__.py", "torch/csrc/jit/serialization/export.cpp"], "doc": [], "test": ["test/onnx/test_pytorch_onnx_onnxruntime.py", "test/onnx/test_pytorch_onnx_no_runtime.py"], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "a63524684d02131aef4f2e9d2cea7bfe210abc96", "iss_html_url": "https://github.com/pytorch/pytorch/issues/84408", "iss_label": "module: onnx\ntriaged\ntopic: bug fixes", "title": "Exporting the operator ::col2im to ONNX opset version 11 is not supported", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nWhen I converted the model in \u201c.pt\u201d format to onnx format, I received an error that the operator col2im is not supported.\r\n\r\n## code\r\n\r\n import torch\r\n from cvnets import get_model\r\n from options.opts import 
get_segmentation_eval_arguments\r\n \r\n def pt2onnx():\r\n opts = get_segmentation_eval_arguments()\r\n model = get_model(opts)\r\n model.eval()\r\n onnx_save_path = \"model/mobilevit.onnx\"\r\n in_data = torch.randn(1, 3, 512, 512)\r\n torch.onnx.export(model, \r\n in_data, \r\n onnx_save_path, \r\n opset_version=11, \r\n do_constant_folding=True, \r\n input_names=[\"in\"],\r\n output_names=[\"out\"])\r\n return\r\n\r\n if __name__ == '__main__':\r\n pt2onnx()\r\n\r\n## error\r\nTraceback (most recent call last):\r\n File \"/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py\", line 20, in \r\n pt2onnx()\r\n File \"/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py\", line 13, in pt2onnx\r\n torch.onnx.export(model, in_data, onnx_save_path, opset_version=11, do_constant_folding=True, input_names=[\"in\"],\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 350, in export\r\n return utils.export(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 163, in export\r\n _export(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1074, in _export\r\n graph, params_dict, torch_out = _model_to_graph(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 731, in _model_to_graph\r\n graph = _optimize_graph(\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 308, in _optimize_graph\r\n graph = _C._jit_pass_onnx(graph, operator_export_type)\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py\", line 416, in _run_symbolic_function\r\n return utils._run_symbolic_function(*args, **kwargs)\r\n File \"/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py\", line 1421, in _run_symbolic_function\r\n raise symbolic_registry.UnsupportedOperatorError(\r\n**torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::col2im to ONNX opset version 11 is not supported. 
Please feel free to request support or submit a pull request on PyTorch GitHub.**\r\n\r\n## ENV\r\nPyTorch version: 1.12.1\r\nIs debug build: False\r\nCUDA used to build PyTorch: 10.2\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 22.04.1 LTS (x86_64)\r\nGCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0\r\nClang version: Could not collect\r\nCMake version: version 3.22.1\r\nLibc version: glibc-2.35\r\n\r\nPython version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-47-generic-x86_64-with-glibc2.35\r\nIs CUDA available: True\r\nCUDA runtime version: Could not collect\r\nGPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050\r\nNvidia driver version: 510.85.02\r\ncuDNN version: Could not collect\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nVersions of relevant libraries:\r\n[pip3] mypy-extensions==0.4.3\r\n[pip3] numpy==1.23.1\r\n[pip3] pytorchvideo==0.1.5\r\n[pip3] torch==1.12.1\r\n[pip3] torchaudio==0.12.1\r\n[pip3] torchvision==0.13.1\r\n[conda] blas 1.0 mkl \r\n[conda] cudatoolkit 10.2.89 hfd86e86_1 \r\n[conda] ffmpeg 4.3 hf484d3e_0 pytorch\r\n[conda] mkl 2021.4.0 h06a4308_640 \r\n[conda] mkl-service 2.4.0 py310h7f8727e_0 \r\n[conda] mkl_fft 1.3.1 py310hd6ae3a3_0 \r\n[conda] mkl_random 1.2.2 py310h00e6091_0 \r\n[conda] numpy 1.23.1 py310h1794996_0 \r\n[conda] numpy-base 1.23.1 py310hcba007f_0 \r\n[conda] pytorch 1.12.1 py3.10_cuda10.2_cudnn7.6.5_0 pytorch\r\n[conda] pytorch-mutex 1.0 cuda pytorch\r\n[conda] pytorchvideo 0.1.5 pypi_0 pypi\r\n[conda] torchaudio 0.12.1 py310_cu102 pytorch\r\n[conda] torchvision 0.13.1 py310_cu102 pytorch\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/a63524684d02131aef4f2e9d2cea7bfe210abc96", "file_loc": "{'base_commit': 'a63524684d02131aef4f2e9d2cea7bfe210abc96', 'files': [{'path': 'test/onnx/test_pytorch_onnx_no_runtime.py', 'status': 'modified', 'Loc': {\"('TestONNXExport', None, 79)\": {'add': [1158]}}}, {'path': 'test/onnx/test_pytorch_onnx_onnxruntime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [47]}}}, {'path': 'torch/csrc/jit/serialization/export.cpp', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [84], 'mod': [62]}}}, {'path': 'torch/onnx/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27, 64]}}}, {'path': 'torch/onnx/_constants.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["torch/onnx/_constants.py", "torch/onnx/__init__.py", "torch/csrc/jit/serialization/export.cpp"], "doc": [], "test": ["test/onnx/test_pytorch_onnx_onnxruntime.py", "test/onnx/test_pytorch_onnx_no_runtime.py"], "config": [], "asset": []}} {"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "64a46031b9c22e2a0526d0216eef627a91da880d", "iss_has_pr": 1, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/384", "iss_label": "", "title": "install error", "body": "Traceback (most recent call last):\r\n File \"/usr/share/hackingtool/hackingtool.py\", line 106, in \r\n os.mkdir(archive)\r\nFileNotFoundError: [Errno 2] No such file or directory: ''\r\n \r\n \r\n and i was in root mode also but this showing what to do help", "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/387", "file_loc": "{'base_commit': 
'64a46031b9c22e2a0526d0216eef627a91da880d', 'files': [{'path': 'hackingtool.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [105, 106]}}}, {'path': 'tools/others/socialmedia.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, \"('Faceshell', 'run', 48)\": {'mod': [51]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["hackingtool.py", "tools/others/socialmedia.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "b754525e99ca62424c484fe529b6142f6bab939e", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/5160", "iss_label": "bug\nStale", "title": "Docker Multi-GPU DDP training hang on `destroy_process_group()` with `wandb` option 3", "body": "Hello, when I try to training using multi gpu based on docker file images. I got the below error. I use Ubuntu 18.04, python 3.8.\r\n<<<<<<<<<<<<<<<<>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>\r\n\r\n```\r\nroot@5a70a5f2d489:/usr/src/app# python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --data data.yaml --weights yolov5s.pt --device 0,1\r\nWARNING:__main__:*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed. \r\n*****************************************\r\nTraceback (most recent call last):\r\n File \"train.py\", line 620, in \r\n main(opt)\r\n File \"train.py\", line 497, in main\r\n check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks\r\n File \"/usr/src/app/utils/general.py\", line 326, in check_file\r\n assert len(files), f'File not found: {file}' # assert file was found\r\nAssertionError: File not found: data.yaml\r\nwandb: (1) Create a W&B account\r\nwandb: (2) Use an existing W&B account\r\nwandb: (3) Don't visualize my results\r\nwandb: Enter your choice: (30 second timeout) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 405) of binary: /opt/conda/bin/python\r\n/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:367: UserWarning: \r\n\r\n**********************************************************************\r\n CHILD PROCESS FAILED WITH NO ERROR_FILE \r\n**********************************************************************\r\nCHILD PROCESS FAILED WITH NO ERROR_FILE\r\nChild process 405 (local_rank 1) FAILED (exitcode 1)\r\nError msg: Process failed with exitcode 1\r\nWithout writing an error file to .\r\nWhile this DOES NOT affect the correctness of your application,\r\nno trace information about the error will be available for inspection.\r\nConsider decorating your top level entrypoint function with\r\ntorch.distributed.elastic.multiprocessing.errors.record. 
Example:\r\n\r\n from torch.distributed.elastic.multiprocessing.errors import record\r\n\r\n @record\r\n def trainer_main(args):\r\n # do train\r\n**********************************************************************\r\n warnings.warn(_no_error_file_warning_msg(rank, failure))\r\nTraceback (most recent call last):\r\n File \"/opt/conda/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/opt/conda/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 702, in \r\n main()\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 361, in wrapper\r\n return f(*args, **kwargs)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 698, in main\r\n run(args)\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py\", line 689, in run\r\n elastic_launch(\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 116, in __call__\r\n return launch_agent(self._config, self._entrypoint, list(args))\r\n File \"/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py\", line 244, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError: \r\n***************************************\r\n train.py FAILED \r\n=======================================\r\nRoot Cause:\r\n[0]:\r\n time: 2021-10-13_04:30:25\r\n rank: 1 (local_rank: 1)\r\n exitcode: 1 (pid: 405)\r\n error_file: \r\n msg: \"Process failed with exitcode 1\"\r\n=======================================\r\nOther Failures:\r\n \r\n***************************************\r\n\r\nroot@5a70a5f2d489:/usr/src/app#\r\n\r\n```", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/5163", "file_loc": "{'base_commit': 'b754525e99ca62424c484fe529b6142f6bab939e', 'files': [{'path': 'utils/loggers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 8, 17, 22]}}}, {'path': 'utils/loggers/wandb/wandb_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24, 25, 27, 28, 29, 30, 31]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["utils/loggers/wandb/wandb_utils.py", "utils/loggers/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "4c77f62f806567644571b6b3f496f7b332b12327", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/656", "iss_label": "", "title": "Remove unnecessary configs such as: tdd, tdd_plus, clarify, respec", "body": "If we have time: benchmark them and store insights before deletion", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/737", "file_loc": "{'base_commit': '4c77f62f806567644571b6b3f496f7b332b12327', 'files': [{'path': 'gpt_engineer/preprompts/fix_code', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/preprompts/spec', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/preprompts/unit_tests', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [60, 395, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 
438]}, \"(None, 'gen_spec', 121)\": {'mod': [121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 133, 135, 138, 139, 140, 141, 142, 143, 144, 145, 146, 148, 150, 151, 153]}, \"(None, 'gen_code_after_unit_tests', 175)\": {'mod': [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189]}, \"(None, 'fix_code', 354)\": {'mod': [354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367]}, \"('Config', None, 378)\": {'mod': [383, 384]}}}, {'path': 'tests/test_collect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11, 12]}, \"(None, 'test_collect_learnings', 15)\": {'mod': [21, 30, 31, 32, 33, 34]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/steps.py"], "doc": [], "test": ["tests/test_collect.py"], "config": [], "asset": ["gpt_engineer/preprompts/unit_tests", "gpt_engineer/preprompts/fix_code", "gpt_engineer/preprompts/spec"]}} @@ -104,8 +104,8 @@ {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "1e95337f3aec4c12244802bb6e493b07b27aa795", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/459", "iss_label": "bug", "title": "custom anchors get flushed when loading pretrain weights", "body": "Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following, otherwise it is non-actionable, and we can not help you:\r\n - **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo\r\n - **Common dataset**: coco.yaml or coco128.yaml\r\n - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#reproduce-our-environment\r\n \r\nIf this is a custom dataset/training question you **must include** your `train*.jpg`, `test*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.\r\n\r\n\r\n## \ud83d\udc1b Bug\r\nin train.py , the anchors set by user in yaml file are flushed by pretrain weights.\r\n```\r\n\r\n if weights.endswith('.pt'): # pytorch format\r\n ckpt = torch.load(weights, map_location=device) # load checkpoint\r\n\r\n # load model\r\n try:\r\n ckpt['model'] = {k: v for k, v in ckpt['model'].float().state_dict().items()\r\n if model.state_dict()[k].shape == v.shape} # to FP32, filter\r\n #print(ckpt['model'].keys())\r\n **#ckpt['model'].pop('model.27.anchors') \r\n #ckpt['model'].pop('model.27.anchor_grid')**\r\n \r\n model.load_state_dict(ckpt['model'], strict=False)\r\n except KeyError as e:\r\n s = \"%s is not compatible with %s. This may be due to model differences or %s may be out of date. 
\" \\\r\n \"Please delete or update %s and try again, or use --weights '' to train from scratch.\" \\\r\n % (opt.weights, opt.cfg, opt.weights, opt.weights)\r\n raise KeyError(s) from e\r\n\r\n```\r\n## To Reproduce (REQUIRED)\r\n\r\nInput:\r\nin ./model/yolov5x.yaml\r\nchange anchors' shape to any other than default.\r\n\r\nOutput:\r\nthe anchors set in yaml file didn't activated .\r\n\r\n\r\n## Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n - OS: [Ubuntu]\r\n - GPU [2080 Ti]\r\n\r\n\r\n## Additional context\r\nif the anchors set by user in yaml file, is more than 9 anchors, the bug didn't get triggered because it did not match the pretrain weight's anchors' shape.\r\n", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/462", "file_loc": "{'base_commit': '1e95337f3aec4c12244802bb6e493b07b27aa795', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {\"(None, 'train', 46)\": {'add': [132, 135], 'mod': [134]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "65d7a9b9902ad85f27b17d759bd13b59c2afc474", "iss_has_pr": 1, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/589", "iss_label": "", "title": "\"No API key provided\" - altough it is provided in the .env file", "body": "## Expected Behavior\r\n\r\nIf the OpenAI API key is provided in the .env file, it should be recognized and used.\r\n\r\n## Current Behavior\r\n\r\nRuntime error message: openai.error.AuthenticationError: No API key provided.\r\n\r\n### Steps to Reproduce\r\n\r\n1. Set the key in the .env file\r\n2. 
Run the app with gpt-engineer projects/my-new-project\r\n\r\n### Solution\r\n\r\nWhen I added the line `openai.api_key = os.getenv(\"OPENAI_API_KEY\")` to the end of the function `load_env_if_needed()` in the file `main.py`, as well as `import openai` at the beginning of this file _(thanks, engerlina, for reminder)_, the issue was resolved.", "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/592", "file_loc": "{'base_commit': '65d7a9b9902ad85f27b17d759bd13b59c2afc474', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, \"(None, 'load_env_if_needed', 19)\": {'add': [21]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt_engineer/main.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "57dc58123b98e2026025cc87bdee474bf0656dcb", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4976", "iss_label": "bug\nWindows", "title": "Fix and document asyncio reactor problems on Windows", "body": "As described in https://twistedmatrix.com/trac/ticket/9766 you cannot just enable AsyncioSelectorReactor on Windows with recent Python, you either need fixed Twisted (which is not released yet, the merged fix is https://github.com/twisted/twisted/pull/1338) or, supposedly, add some manual fix as documented [here](https://github.com/twisted/twisted/blob/09b96850c2ebcb635f448ed3f9bbf5f157be3693/src/twisted/internet/asyncioreactor.py#L35-L44). So if it's possible to add this code to Scrapy we should probably do that, at least until the next Twisted release, and even after it we should document that new enough Twisted is needed in this use case. 
", "pr_html_url": "https://github.com/scrapy/scrapy/pull/5315", "file_loc": "{'base_commit': '57dc58123b98e2026025cc87bdee474bf0656dcb', 'files': [{'path': '.github/workflows/tests-windows.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}}}, {'path': 'docs/topics/asyncio.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}, {'path': 'scrapy/utils/reactor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, \"(None, 'install_reactor', 53)\": {'add': [59]}}}, {'path': 'tests/CrawlerProcess/asyncio_enabled_reactor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 3]}}}, {'path': 'tests/test_commands.py', 'status': 'modified', 'Loc': {\"('RunSpiderCommandTest', None, 557)\": {'mod': [677, 678, 679, 702, 703, 704]}, \"('RunSpiderCommandTest', 'test_custom_asyncio_loop_enabled_false', 705)\": {'mod': [710]}}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7]}, \"('CrawlerRunnerHasSpider', None, 231)\": {'mod': [287, 288, 289]}, \"('CrawlerProcessSubprocess', None, 323)\": {'mod': [331, 332, 333, 339, 340, 341, 380, 381, 382, 407, 408, 409]}}}, {'path': 'tests/test_downloader_handlers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, \"('HttpTestCase', None, 209)\": {'add': [289, 298]}, \"('FTPTestCase', None, 1055)\": {'add': [1057]}}}, {'path': 'tests/test_utils_asyncio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 3]}, \"('AsyncioTest', None, 11)\": {'mod': [17, 18, 19]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/reactor.py", "tests/CrawlerProcess/asyncio_enabled_reactor.py"], "doc": ["docs/topics/asyncio.rst"], "test": ["tests/test_utils_asyncio.py", "tests/test_crawler.py", "tests/test_commands.py", "tests/test_downloader_handlers.py"], "config": [".github/workflows/tests-windows.yml"], "asset": []}} -{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "9b4dfa195e3f23d81389745c26bff8e0087e74b0", "is_iss": 0, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/22046", "iss_label": "Bug\nIndexing", "title": "Replacing multiple columns (or just one) with iloc does not work", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\nimport pandas\r\n\r\ncolumns = pandas.DataFrame({'a2': [11, 12, 13], 'b2': [14, 15, 16]})\r\ninputs = pandas.DataFrame({'a1': [1, 2, 3], 'b1': [4, 5, 6], 'c1': [7, 8, 9]})\r\n\r\ninputs.iloc[:, [1]] = columns.iloc[:, [0]]\r\n\r\nprint(inputs)\r\n```\r\n\r\n#### Problem description\r\n\r\nI have a code which is replacing a set of columns with another set of columns, based on column indices. To make things done without a special case, I assumes I could just use `iloc` to both select and set columns in a DataFrame. 
But it seems that this not work and fails in strange ways.\r\n\r\n#### Expected Output\r\n\r\n```\r\n a1 b1 c1\r\n0 1 11 7\r\n1 2 12 8\r\n2 3 13 9\r\n```\r\n\r\nBut in reality, you get:\r\n\r\n```\r\n a1 b1 c1\r\n0 1.0 NaN 7.0\r\n1 2.0 NaN 8.0\r\n2 3.0 NaN 9.0\r\n```\r\n\r\nSee how values converted to float and how column is `NaN`s?\r\n\r\nBut, if I do the following I get expected results:\r\n\r\n```\r\ninputs.iloc[:, [1]] = [[11], [12], [13]]\r\n```\r\n\r\nThis also works:\r\n\r\n```\r\ninputs.iloc[:, [1]] = columns.iloc[:, [0]].values\r\n```\r\n\r\nSo if it works with lists and ndarrays, one would assume it would also work with DataFrames themselves. But it does not.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.6.3.final.0\r\npython-bits: 64\r\nOS: Linux\r\nOS-release: 4.13.0-46-generic\r\nmachine: x86_64\r\nprocessor: x86_64\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.23.3\r\npytest: None\r\npip: 18.0\r\nsetuptools: 40.0.0\r\nCython: None\r\nnumpy: 1.15.0\r\nscipy: None\r\npyarrow: None\r\nxarray: None\r\nIPython: None\r\nsphinx: None\r\npatsy: None\r\ndateutil: 2.7.3\r\npytz: 2018.5\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: None\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: None\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: None\r\ns3fs: None\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n
\r\n", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/37728", "commit_html_url": null, "file_loc": "{'base_commit': '9b4dfa195e3f23d81389745c26bff8e0087e74b0', 'files': [{'path': 'doc/source/whatsnew/v1.2.0.rst', 'status': 'modified', 'Loc': {'(None, None, 591)': {'add': [591]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {\"('_LocationIndexer', '__setitem__', 675)\": {'mod': [684]}, \"('_iLocIndexer', None, 1322)\": {'mod': [1520, 1631, 1717, 1790]}, \"('_iLocIndexer', '_setitem_with_indexer', 1520)\": {'mod': [1596, 1627, 1629]}, \"('_iLocIndexer', '_setitem_with_indexer_split_path', 1631)\": {'mod': [1645, 1660]}, \"('_iLocIndexer', '_setitem_with_indexer_frame_value', 1717)\": {'mod': [1727]}, \"('_iLocIndexer', '_setitem_single_block', 1790)\": {'mod': [1819, 1825]}, \"('_iLocIndexer', '_setitem_with_indexer_missing', 1836)\": {'mod': [1857]}}}, {'path': 'pandas/tests/frame/indexing/test_setitem.py', 'status': 'modified', 'Loc': {\"('TestDataFrameSetItem', None, 24)\": {'mod': [292, 293, 294, 295, 296, 297, 298, 299]}}}, {'path': 'pandas/tests/indexing/test_iloc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [803]}, \"('TestILocSeries', 'test_iloc_getitem_nonunique', 966)\": {'add': [968]}}}, {'path': 'pandas/tests/indexing/test_indexing.py', 'status': 'modified', 'Loc': {\"('TestMisc', 'test_rhs_alignment', 668)\": {'mod': [671, 690, 696, 697, 700, 703, 707]}, \"('TestMisc', 'run_tests', 671)\": {'mod': [678, 682, 686]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/indexing.py"], "doc": ["doc/source/whatsnew/v1.2.0.rst"], "test": ["pandas/tests/frame/indexing/test_setitem.py", "pandas/tests/indexing/test_indexing.py", "pandas/tests/indexing/test_iloc.py"], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5446c7e490e7203c61b2ff31181551b2c0f4a86b", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1430", "iss_label": "", "title": "DO NOT FORCE VALIDATE '{'Required Python packages'}' by default", "body": "**Bug description**\r\n`metagpt\\actions\\action_node.py\", line 432, in _aask_v1\r\n instruct_content = output_class(**parsed_data)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"..........\\Lib\\site-packages\\pydantic\\main.py\", line 171, in __init__\r\n self.__pydantic_validator__.validate_python(data, self_instance=self)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Required Python packages'} `\r\n\r\n**Bug solved method**\r\nDO NOT VALIDATE THIS FIELD. user may ask the agents to do non py related stuff,why would we force this validate and introduce a hard error? 
Seems silly.\r\n\r\n**Environment information**\r\nirrelevant\r\n\r\n- LLM type and model name:\r\n- MetaGPT version or branch:0.8.1\r\n\r\n\r\n**Screenshots or logs**\r\n`action_node.py\", line 432, in _aask_v1\r\n instruct_content = output_class(**parsed_data)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \".........\\Lib\\site-packages\\pydantic\\main.py\", line 171, in __init__\r\n self.__pydantic_validator__.validate_python(data, self_instance=self)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Required Python packages'} [type=value_error, input_value={'Required Rust packages'...ption for backup data.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.6/v/value_error\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):`", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1435", "commit_html_url": null, "file_loc": "{'base_commit': '5446c7e490e7203c61b2ff31181551b2c0f4a86b', 'files': [{'path': 'metagpt/actions/design_api_an.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47], 'mod': [8, 50, 69]}}}, {'path': 'metagpt/actions/project_management_an.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8, 14]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/actions/design_api_an.py", "metagpt/actions/project_management_an.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "9b4dfa195e3f23d81389745c26bff8e0087e74b0", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/22046", "iss_label": "Bug\nIndexing", "title": "Replacing multiple columns (or just one) with iloc does not work", "body": "#### Code Sample, a copy-pastable example if possible\r\n\r\n```python\r\nimport pandas\r\n\r\ncolumns = pandas.DataFrame({'a2': [11, 12, 13], 'b2': [14, 15, 16]})\r\ninputs = pandas.DataFrame({'a1': [1, 2, 3], 'b1': [4, 5, 6], 'c1': [7, 8, 9]})\r\n\r\ninputs.iloc[:, [1]] = columns.iloc[:, [0]]\r\n\r\nprint(inputs)\r\n```\r\n\r\n#### Problem description\r\n\r\nI have a code which is replacing a set of columns with another set of columns, based on column indices. To make things done without a special case, I assumes I could just use `iloc` to both select and set columns in a DataFrame. But it seems that this not work and fails in strange ways.\r\n\r\n#### Expected Output\r\n\r\n```\r\n a1 b1 c1\r\n0 1 11 7\r\n1 2 12 8\r\n2 3 13 9\r\n```\r\n\r\nBut in reality, you get:\r\n\r\n```\r\n a1 b1 c1\r\n0 1.0 NaN 7.0\r\n1 2.0 NaN 8.0\r\n2 3.0 NaN 9.0\r\n```\r\n\r\nSee how values converted to float and how column is `NaN`s?\r\n\r\nBut, if I do the following I get expected results:\r\n\r\n```\r\ninputs.iloc[:, [1]] = [[11], [12], [13]]\r\n```\r\n\r\nThis also works:\r\n\r\n```\r\ninputs.iloc[:, [1]] = columns.iloc[:, [0]].values\r\n```\r\n\r\nSo if it works with lists and ndarrays, one would assume it would also work with DataFrames themselves. But it does not.\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
\r\n\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.6.3.final.0\r\npython-bits: 64\r\nOS: Linux\r\nOS-release: 4.13.0-46-generic\r\nmachine: x86_64\r\nprocessor: x86_64\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.23.3\r\npytest: None\r\npip: 18.0\r\nsetuptools: 40.0.0\r\nCython: None\r\nnumpy: 1.15.0\r\nscipy: None\r\npyarrow: None\r\nxarray: None\r\nIPython: None\r\nsphinx: None\r\npatsy: None\r\ndateutil: 2.7.3\r\npytz: 2018.5\r\nblosc: None\r\nbottleneck: None\r\ntables: None\r\nnumexpr: None\r\nfeather: None\r\nmatplotlib: None\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: None\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: None\r\ns3fs: None\r\nfastparquet: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n\r\n
\r\n", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/37728", "commit_html_url": null, "file_loc": "{'base_commit': '9b4dfa195e3f23d81389745c26bff8e0087e74b0', 'files': [{'path': 'doc/source/whatsnew/v1.2.0.rst', 'status': 'modified', 'Loc': {'(None, None, 591)': {'add': [591]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {\"('_LocationIndexer', '__setitem__', 675)\": {'mod': [684]}, \"('_iLocIndexer', None, 1322)\": {'mod': [1520, 1631, 1717, 1790]}, \"('_iLocIndexer', '_setitem_with_indexer', 1520)\": {'mod': [1596, 1627, 1629]}, \"('_iLocIndexer', '_setitem_with_indexer_split_path', 1631)\": {'mod': [1645, 1660]}, \"('_iLocIndexer', '_setitem_with_indexer_frame_value', 1717)\": {'mod': [1727]}, \"('_iLocIndexer', '_setitem_single_block', 1790)\": {'mod': [1819, 1825]}, \"('_iLocIndexer', '_setitem_with_indexer_missing', 1836)\": {'mod': [1857]}}}, {'path': 'pandas/tests/frame/indexing/test_setitem.py', 'status': 'modified', 'Loc': {\"('TestDataFrameSetItem', None, 24)\": {'mod': [292, 293, 294, 295, 296, 297, 298, 299]}}}, {'path': 'pandas/tests/indexing/test_iloc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [803]}, \"('TestILocSeries', 'test_iloc_getitem_nonunique', 966)\": {'add': [968]}}}, {'path': 'pandas/tests/indexing/test_indexing.py', 'status': 'modified', 'Loc': {\"('TestMisc', 'test_rhs_alignment', 668)\": {'mod': [671, 690, 696, 697, 700, 703, 707]}, \"('TestMisc', 'run_tests', 671)\": {'mod': [678, 682, 686]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/indexing.py"], "doc": ["doc/source/whatsnew/v1.2.0.rst"], "test": ["pandas/tests/frame/indexing/test_setitem.py", "pandas/tests/indexing/test_indexing.py", "pandas/tests/indexing/test_iloc.py"], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "5446c7e490e7203c61b2ff31181551b2c0f4a86b", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1430", "iss_label": "", "title": "DO NOT FORCE VALIDATE '{'Required Python packages'}' by default", "body": "**Bug description**\r\n`metagpt\\actions\\action_node.py\", line 432, in _aask_v1\r\n instruct_content = output_class(**parsed_data)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"..........\\Lib\\site-packages\\pydantic\\main.py\", line 171, in __init__\r\n self.__pydantic_validator__.validate_python(data, self_instance=self)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Required Python packages'} `\r\n\r\n**Bug solved method**\r\nDO NOT VALIDATE THIS FIELD. user may ask the agents to do non py related stuff,why would we force this validate and introduce a hard error? 
Seems silly.\r\n\r\n**Environment information**\r\nirrelevant\r\n\r\n- LLM type and model name:\r\n- MetaGPT version or branch:0.8.1\r\n\r\n\r\n**Screenshots or logs**\r\n`action_node.py\", line 432, in _aask_v1\r\n instruct_content = output_class(**parsed_data)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \".........\\Lib\\site-packages\\pydantic\\main.py\", line 171, in __init__\r\n self.__pydantic_validator__.validate_python(data, self_instance=self)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Required Python packages'} [type=value_error, input_value={'Required Rust packages'...ption for backup data.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.6/v/value_error\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):`", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1435", "commit_html_url": null, "file_loc": "{'base_commit': '5446c7e490e7203c61b2ff31181551b2c0f4a86b', 'files': [{'path': 'metagpt/actions/design_api_an.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47], 'mod': [8, 50, 69]}}}, {'path': 'metagpt/actions/project_management_an.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8, 14]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/actions/design_api_an.py", "metagpt/actions/project_management_an.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_has_pr": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/447", "iss_label": "dependencies", "title": "Pytorch synthesizer", "body": "Splitting this off from #370, which will remain for tensorflow2 conversion. I would prefer this route if we can get it to work. 
Asking for help from the community on this one.\r\n\r\nOne example of a pytorch-based tacotron is: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2\r\n\r\nAnother option is to manually convert the code and pretrained models which would be extremely time-consuming, but also an awesome learning experience.", "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472", "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [18, 23, 24, 65, 66, 68, 70]}}}, {'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13, 43, 162], 'mod': [24, 25, 26, 30, 31, 32, 70, 76]}}}, {'path': 'demo_toolbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 32], 'mod': [23, 24, 25]}}}, {'path': 'encoder/audio.py', 'status': 'modified', 'Loc': {\"(None, 'preprocess_wav', 19)\": {'mod': [20, 43, 44]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16], 'mod': [1]}}}, {'path': 'requirements_gpu.txt', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/LICENSE.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 4]}}}, {'path': 'synthesizer/audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'synthesizer/feeder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/hparams.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [348], 'mod': [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 149, 150, 151, 152, 153, 154, 155, 157, 158, 159, 160, 161, 162, 164, 165, 166, 167, 168, 169, 170, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 190, 191, 192, 193, 194, 196, 197, 198, 199, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 231, 232, 233, 234, 235, 237, 238, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 342, 343, 344, 345, 347]}, \"(None, 'hparams_debug_string', 350)\": {'mod': [351, 352, 353]}}}, {'path': 'synthesizer/inference.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [1, 2, 3, 4, 5, 9, 11]}, \"('Synthesizer', '__init__', 19)\": {'add': [33], 'mod': [21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59]}, \"('Synthesizer', 'griffin_lim', 149)\": {'add': [154]}, \"('Synthesizer', None, 15)\": {'mod': [19, 106, 107, 108, 109, 110, 111, 
113, 114, 116, 117, 118, 119, 121]}, \"('Synthesizer', 'is_loaded', 61)\": {'mod': [63]}, \"('Synthesizer', 'load', 67)\": {'mod': [69, 70, 71, 72, 73, 74, 75]}, \"('Synthesizer', 'synthesize_spectrograms', 77)\": {'mod': [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 104]}}}, {'path': 'synthesizer/infolog.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/__init__.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/architecture_wrappers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/attention.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/custom_decoder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/helpers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/modules.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/tacotron.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [1, 2, 3, 4, 5, 6, 7, 8, 9]}, \"(None, 'split_func', 14)\": {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 79, 81, 82, 84, 86, 87, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 145, 147, 148, 151, 153, 154, 155, 156, 157, 158, 160, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 198, 199, 200, 201, 202, 203, 205, 206, 207, 209, 210, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 225, 226, 228, 229, 230, 232, 233, 234, 235, 237, 238, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 252, 253, 254, 256, 257, 259, 260, 261, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 312, 313, 314, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 327, 328, 330, 331, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 405, 406, 407, 409, 410, 412, 413, 414, 415, 416, 417, 418, 420, 421, 422, 423, 424, 425, 427, 428, 429, 430, 431, 432, 433, 435, 436, 437, 439, 441, 442, 443, 444, 445, 446, 447, 448, 449, 451, 452, 454, 455, 456, 457, 458, 459, 460, 461, 462, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 501, 502, 504, 505, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 521]}}}, {'path': 'synthesizer/preprocess.py', 'status': 'modified', 'Loc': {\"(None, 'process_utterance', 185)\": {'add': [204]}}}, {'path': 'synthesizer/synthesize.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82], 'mod': [1, 3, 4, 6, 7]}, \"(None, 'run_eval', 10)\": {'mod': [10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37]}, \"(None, 'run_synthesis', 39)\": {'mod': [40, 41, 42, 43, 45, 46, 47, 48, 
50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 78, 80, 81]}}}, {'path': 'synthesizer/tacotron2.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 79, 83], 'mod': [3, 4, 5, 6, 7, 9, 10, 12, 14, 16, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]}, \"(None, 'model_train_mode', 85)\": {'mod': [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 136, 138, 139, 141, 142, 143, 144, 146, 147, 148, 149]}, \"(None, 'train', 110)\": {'mod': [151, 152, 153, 154, 155, 156, 157, 159, 161, 167, 169, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 184, 185, 186, 187, 189, 190, 191, 192, 194, 195, 196, 198, 199, 201, 202, 204, 205, 207, 208, 210, 212, 213, 214, 215, 216, 218, 219, 220, 222, 223, 224, 226, 227, 228, 230, 231, 232, 233, 234, 235, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 374, 375, 376, 377, 378, 379, 381, 382, 383, 385, 386, 387, 388, 391, 392]}}}, {'path': 'synthesizer/utils/__init__.py', 'status': 'modified', 'Loc': {\"('ValueWindow', None, 1)\": {'add': [0]}}}, {'path': 'synthesizer_train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 53, 55]}}}, {'path': 'toolbox/__init__.py', 'status': 'modified', 'Loc': {\"('Toolbox', 'init_encoder', 325)\": {'add': [333]}, \"('Toolbox', None, 42)\": {'mod': [43]}, \"('Toolbox', '__init__', 43)\": {'mod': [54]}, \"('Toolbox', 'synthesize', 207)\": {'mod': [211, 212, 213, 214, 215, 216, 217, 221, 224, 228]}, \"('Toolbox', 'vocode', 237)\": {'mod': [243]}}}, {'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [41]}, \"('UI', None, 53)\": {'mod': [331]}, \"('UI', 'populate_models', 338)\": {'mod': [347, 348, 349, 350, 351, 352, 353]}}}, {'path': 'vocoder_preprocess.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32, 40], 'mod': [20]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["synthesizer/models/modules.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "synthesizer_train.py", "demo_cli.py", "toolbox/__init__.py", "demo_toolbox.py", "synthesizer/models/architecture_wrappers.py", "synthesizer/audio.py", "synthesizer/preprocess.py", "synthesizer/tacotron2.py", "synthesizer/hparams.py", "synthesizer/utils/__init__.py", 
"synthesizer/synthesize.py", "toolbox/ui.py", "encoder/audio.py", "synthesizer/feeder.py", "synthesizer/models/helpers.py", "synthesizer/models/__init__.py", "synthesizer/inference.py", "vocoder_preprocess.py", "synthesizer/models/custom_decoder.py", "synthesizer/infolog.py"], "doc": ["synthesizer/LICENSE.txt", "README.md"], "test": [], "config": ["requirements_gpu.txt", "requirements.txt"], "asset": []}} {"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "3c922603c0a7d1ad4113245a3d2bcd23bf4b1619", "iss_has_pr": 1, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/875", "iss_label": "Bug", "title": "NameError: name 'computer' is not defined ", "body": "### Describe the bug\n\nWhen I run `interpreter --os`\r\n\r\nAnd then attempt a command like:\r\n`Play a boiler room set on youtube`\r\n\r\nI get a `NameError`:\r\n\r\n```\r\n\u258c OS Control enabled \r\n\r\nTo find items on the screen, Open Interpreter has been instructed to send screenshots to api.openinterpreter.com (we do not store them). Add --offline to attempt this locally. \r\n\r\nMake sure that screen recording permissions are enabled for your Terminal or Python environment. \r\n\r\nWarning: In this mode, Open Interpreter will not require approval before performing actions. Be ready to close your terminal. \r\n\r\n> Play a boiler room set on youtube\r\n \r\n The first step is to open YouTube. We can do this by using a keyboard hotkey to open Spotlight and then typing \"Safari\" to open the browser. Once Safari is open, we can navigate \r\n to YouTube by writing the URL in the address bar. Let's start by opening Safari. \r\n \r\nExpecting value: line 1 column 1 (char 0)\r\nContinuing.\r\n\r\n \r\n computer.keyboard.hotkey(\"space\", \"command\") # Opens spotlight \r\n computer.keyboard.write(\"Safari\") \r\n computer.keyboard.press(\"enter\") # To open Safari \r\n \r\n \r\n --------------------------------------------------------------------------- \r\n NameError Traceback (most recent call last) \r\n Input In [4], in () \r\n 1 print('##active_line1##') \r\n ----> 2 computer.keyboard.hotkey('space', 'command') \r\n 3 print('##active_line2##') \r\n 4 computer.keyboard.write('Safari') \r\n \r\n NameError: name 'computer' is not defined \r\n \r\n \r\n It seems there was a mistake in my attempt to run the code. \"computer\" is not recognized as defined within the current execution environment. Let's correct this by using the \r\n appropriate syntax to execute the code on your machine. We will try to open Safari using the correct approach. Let's run the correct code. \r\n \r\nExpecting value: line 1 column 1 (char 0)\r\nContinuing.\r\n\r\n \r\n # Opens spotlight and types Safari to open the browser \r\n computer.keyboard.hotkey(\"space\", \"command\") \r\n computer.keyboard.write(\"Safari\") \r\n computer.keyboard.press(\"enter\") \r\n \r\n \r\n --------------------------------------------------------------------------- \r\n NameError Traceback (most recent call last) \r\n Input In [7], in () \r\n 2 pass \r\n 3 print('##active_line2##') \r\n ----> 4 computer.keyboard.hotkey('space', 'command') \r\n 5 print('##active_line3##') \r\n 6 computer.keyboard.write('Safari') \r\n \r\n NameError: name 'computer' is not defined \r\n```\r\n\r\nAnd it just gets stuck in this loop where computer is not defined.\r\n\r\n\r\n \n\n### Reproduce\n\n1. `interpreter --os` \r\n2. 
`Play a boiler room set on youtube`\r\n\n\n### Expected behavior\n\nFor it to be able to open Safari or my default web browser without a Name Error of computer.\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.2.0\n\n### Python version\n\n3.9.6\n\n### Operating System name and version\n\nmacOS 14.0\n\n### Additional context\n\nI have 2 python versions installed. 3.9.6 and 3.10.8. I installed interpreter on both. ", "pr_html_url": "https://github.com/OpenInterpreter/open-interpreter/pull/937", "file_loc": "{'base_commit': '3c922603c0a7d1ad4113245a3d2bcd23bf4b1619', 'files': [{'path': 'interpreter/core/computer/terminal/terminal.py', 'status': 'modified', 'Loc': {\"('Terminal', 'run', 36)\": {'mod': [40]}}}, {'path': 'interpreter/terminal_interface/start_terminal_interface.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}, \"(None, 'start_terminal_interface', 19)\": {'mod': [303, 544, 545, 546, 548, 593, 603, 608, 633]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["interpreter/core/computer/terminal/terminal.py", "interpreter/terminal_interface/start_terminal_interface.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "ad14f0e49929d426560413c0b9de19986cbeac9e", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/461", "iss_label": "bug", "title": "SileroTTS creates new audio file for each token", "body": "### Describe the bug\r\n\r\nI've just performed a fresh install to confirm this.\r\n\r\nUnless i turn on no stream, SileroTTS will attempt to create an audio file for each word / token. \r\n\r\nSilero should not attempt to create audio until the response is complete.\r\n\r\nSilero extension output directory is being filled up with audio clips that only add one word to the previous file. Is this known to be broken like this?\r\n\r\nTurning off stream works, but it means that the text stream doesn't work. Is there a way to turn off streaming for Silero only?\r\n\r\n\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n1. Enable Silero Extension\r\n2. Disable Auto Play\r\n3. 
Start Chat\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nN/A\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nWindows 11 / Firefox or Edge\r\n```\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/192", "file_loc": "{'base_commit': 'ad14f0e49929d426560413c0b9de19986cbeac9e', 'files': [{'path': 'extensions/silero_tts/script.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 5, 14, 35], 'mod': [10, 18]}, \"(None, 'input_modifier', 36)\": {'add': [41]}, \"(None, 'output_modifier', 44)\": {'add': [59, 65, 67], 'mod': [49, 69, 70, 72, 73]}, \"(None, 'ui', 86)\": {'add': [92, 93], 'mod': [88, 89]}}}, {'path': 'modules/shared.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}}}, {'path': 'modules/text_generation.py', 'status': 'modified', 'Loc': {\"(None, 'generate_reply', 88)\": {'add': [189, 202, 205, 219, 224], 'mod': [199, 216]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["extensions/silero_tts/script.py", "modules/shared.py", "modules/text_generation.py"], "doc": [], "test": [], "config": [], "asset": []}} @@ -142,13 +142,13 @@ {"organization": "huggingface", "repo_name": "transformers", "base_commit": "eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/8171", "iss_label": "New model", "title": "Need suggestion on contributing TFDPR", "body": "# \ud83c\udf1f New model addition\r\n\r\n## Model description\r\nHi, I would love to try contributing TFDPR . This is the first time to me, so I need some suggestions.\r\nI have followed @sshleifer 's [great PR on TFBart model](https://github.com/huggingface/transformers/commit/829842159efeb1f920cbbb1daf5ad67e0114d0b9) on 4 files :` __init__.py , convert_pytorch_checkpoint_to_tf2.py , utils/dummy_tf_objects.py` and (newly created) `modeling_tf_dpr.py `\r\n\r\nNow the TF model works properly and can load Pytorch's weights successfully the same output as Pytorch's counterparts **except** small random noise (1e-5) which I suspect of some dtypes different , but I could not find the cause. \r\n\r\nI guess I need to add document on docs/source/model_doc/dpr.rst , and that's all ? \r\n**My question is do I need to change / fix any other files ? and/or do I need to do some other thing before making PR ?**\r\n\r\n\r\nTo resolve TF vs. Pytorch naming issues, there's one change regarding `TFBertModel` vs. `TFBertMainLayer` as [discussed here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764) .\r\nThanks to @sshleifer for his help to solve the issue.\r\n\r\n## Open source status\r\n\r\n* [X] the model implementation is available: (give details)\r\nYou can see all the modified codes with test run at : \r\nhttps://colab.research.google.com/drive/1lU4fx7zkr-Y3CXa3wmHIY8yJhKdiN3DI?usp=sharing\r\n(to easily navigate the changes, please \u201cfind on page\u201d for e.g. 
`TFDPRContextEncoder` )\r\n\r\n* [X] the model weights are available: (give details)\r\nAt the moment, I use existing Pytorch weights, but will upload TF weights too.\r\n\r\n* [X] who are the authors: (mention them, if possible by @gh-username)\r\n@ratthachat ", "pr_html_url": "https://github.com/huggingface/transformers/pull/8203", "file_loc": "{'base_commit': 'eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d', 'files': [{'path': 'docs/source/model_doc/dpr.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [101]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [408, 715]}}}, {'path': 'src/transformers/convert_pytorch_checkpoint_to_tf2.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27, 45, 61, 100, 149]}}}, {'path': 'src/transformers/modeling_tf_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45, 89, 194]}}}, {'path': 'src/transformers/utils/dummy_pt_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [737]}}}, {'path': 'src/transformers/utils/dummy_tf_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [497]}}}, {'path': 'tests/test_modeling_dpr.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [26]}, \"('DPRModelTest', 'test_model_from_pretrained', 214)\": {'add': [229]}}}, {'path': 'utils/check_repo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [35, 59, 89]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/utils/dummy_pt_objects.py", "src/transformers/utils/dummy_tf_objects.py", "src/transformers/__init__.py", "src/transformers/modeling_tf_auto.py", "utils/check_repo.py", "src/transformers/convert_pytorch_checkpoint_to_tf2.py"], "doc": ["docs/source/model_doc/dpr.rst"], "test": ["tests/test_modeling_dpr.py"], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "9bee9ff5db6e68fb31065898d7e924d07c1eb9c1", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/34390", "iss_label": "bug", "title": "[mask2former] torch.export error for Mask2Former", "body": "### System Info\r\n\r\n- `transformers` version: 4.46.0.dev0\r\n- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35\r\n- Python version: 3.11.9\r\n- Huggingface_hub version: 0.25.2\r\n- Safetensors version: 0.4.5\r\n- Accelerate version: not installed\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 2.4.1+cu121 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using distributed or parallel set-up in script?: \r\n- Using GPU in script?: \r\n- GPU type: NVIDIA GeForce RTX 4090\r\n\r\n### Who can help?\r\n\r\n@amyeroberts, @qubvel, @ylacombe\r\n\r\n### Information\r\n\r\n- [ ] The official example scripts\r\n- [X] My own modified scripts\r\n\r\n### Tasks\r\n\r\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\r\n- [ ] My own task or dataset (give details below)\r\n\r\n### Reproduction\r\n\r\n```python\r\nimport torch\r\nfrom transformers import Mask2FormerForUniversalSegmentation\r\n\r\nmodel = Mask2FormerForUniversalSegmentation.from_pretrained(\r\n \"facebook/mask2former-swin-base-coco-panoptic\", 
torchscript=True\r\n)\r\n\r\nscripted_model = torch.export.export(model, args=(torch.randn(1, 3, 800, 1280),))\r\n```\r\nwhich causes\r\n```\r\nUserError: Could not extract specialized integer from data-dependent expression u0 (unhinted: u0). (Size-like symbols: none)\r\n\r\nPotential framework code culprit (scroll up for full backtrace):\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/_dynamo/utils.py\", line 2132, in run_node\r\n return node.target(*args, **kwargs)\r\n\r\nFor more information, run with TORCH_LOGS=\"dynamic\"\r\nFor extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL=\"u0\"\r\nIf you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\r\nFor more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing\r\n\r\nUser Stack (most recent call last):\r\n (snipped, see stack below for prefix)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2499, in forward\r\n outputs = self.model(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2270, in forward\r\n pixel_level_module_output = self.pixel_level_module(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1395, in forward\r\n decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1319, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1165, in forward\r\n reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1106, in get_reference_points\r\n torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),\r\n\r\nFor C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1\r\nFor more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example\r\n\r\nfrom user code:\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2499, in forward\r\n outputs = self.model(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File 
\"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 2270, in forward\r\n pixel_level_module_output = self.pixel_level_module(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1395, in forward\r\n decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1319, in forward\r\n encoder_outputs = self.encoder(\r\n File \"/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py\", line 1747, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1165, in forward\r\n reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)\r\n File \"/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py\", line 1106, in get_reference_points\r\n torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),\r\n ```\r\n\r\n### Expected behavior\r\n\r\ntorch.export works for this model.", "pr_html_url": "https://github.com/huggingface/transformers/pull/34393", "file_loc": "{'base_commit': '9bee9ff5db6e68fb31065898d7e924d07c1eb9c1', 'files': [{'path': 'src/transformers/models/mask2former/modeling_mask2former.py', 'status': 'modified', 'Loc': {\"('Mask2FormerPixelDecoder', 'forward', 1280)\": {'add': [1333], 'mod': [1305, 1307, 1323, 1337, 1339, 1341, 1345]}, \"('Mask2FormerPixelDecoderEncoderMultiscaleDeformableAttention', 'forward', 921)\": {'mod': [929, 939, 960, 973]}, \"('Mask2FormerPixelDecoderEncoderLayer', 'forward', 998)\": {'mod': [1004, 1018, 1019, 1036]}, \"('Mask2FormerPixelDecoderEncoderOnly', None, 1069)\": {'mod': [1089]}, \"('Mask2FormerPixelDecoderEncoderOnly', 'get_reference_points', 1089)\": {'mod': [1094, 1095, 1104]}, \"('Mask2FormerPixelDecoderEncoderOnly', 'forward', 1120)\": {'mod': [1125, 1143, 1144, 1165, 1179]}, \"('Mask2FormerMaskedAttentionDecoder', 'forward', 1792)\": {'mod': [1879]}}}, {'path': 'tests/models/mask2former/test_modeling_mask2former.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22]}, \"('Mask2FormerModelIntegrationTest', 'test_with_segmentation_maps_and_loss', 466)\": {'add': [483]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["src/transformers/models/mask2former/modeling_mask2former.py"], "doc": [], "test": ["tests/models/mask2former/test_modeling_mask2former.py"], "config": [], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "19d0942c74731d797a3590b1d8d46ece5a6d751f", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3077", "iss_label": "bug\nupstream issue", "title": "scrapy selector fails when large lines are present response", "body": "Originally encoutered 
when scraping [Amazon restaurant](https://www.amazon.com/restaurants/zzzuszimbos0015gammaloc1name-new-york/d/B01HH7CS44?ref_=amzrst_pnr_cp_b_B01HH7CS44_438). \r\nThis page contains multiple script tag with lines greater then 64,000 character in one line. \r\nThe selector (xpath and css) does not search beyond these lines. \r\n\r\nDue to this the following xpath `'//h1[contains(@class, \"hw-dp-restaurant-name\")]/text()'` to extract name of the restaurant returns empty even though there is a matching tag is present.\r\n\r\n\r\nPFA the response text at [original_response.html.txt.gz](https://github.com/scrapy/scrapy/files/1631425/original_response.html.txt.gz)\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/261", "file_loc": "{'base_commit': '19d0942c74731d797a3590b1d8d46ece5a6d751f', 'files': [{'path': 'docs/contributing.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [76]}}}, {'path': 'scrapy/tests/test_utils_url.py', 'status': 'modified', 'Loc': {\"('UrlUtilsTest', None, 8)\": {'add': [50]}, '(None, None, None)': {'mod': [3, 4]}, \"('MySpider', 'test_url_is_from_spider_with_allowed_domains_class_attributes', 52)\": {'mod': [54]}}}, {'path': 'scrapy/utils/url.py', 'status': 'modified', 'Loc': {\"(None, 'url_is_from_spider', 25)\": {'mod': [27, 28]}, \"(None, 'canonicalize_url', 33)\": {'mod': [33]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/url.py"], "doc": ["docs/contributing.rst"], "test": ["scrapy/tests/test_utils_url.py"], "config": [], "asset": []}} -{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "953757a3e37ffb80570a20a8eca52dae35fc27bb", "is_iss": 0, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/22471", "iss_label": "Testing\nClean\ngood first issue", "title": "TST/CLN: remove TestData from frame-tests; replace with fixtures", "body": "Following review in #22236: \r\n> ok, pls open a new issue that refs this, to remove use of `TestData` in favor of fixtures\r\n\r\nStarted the process in that PR by creating a `conftest.py` that translates all the current attributes of `TestData` to fixtures, with the following \"translation guide\":\r\n\r\n* `frame` -> `float_frame`\r\n* `frame2` -> `float_frame2`\r\n* `intframe` -> `int_frame`\r\n* `tsframe` -> `datetime_frame`\r\n* `mixed_frame` -> `float_string_frame`\r\n* `mixed_float` -> `mixed_float_frame`\r\n* `mixed_float2` -> `mixed_float_frame2`\r\n* `mixed_int` -> `mixed_int_frame`\r\n* `all_mixed` -> `mixed_type_frame`\r\n* `tzframe` -> `timezone_frame`\r\n* `empty` -> `empty_frame`\r\n* `ts1` -> `datetime_series`\r\n* `ts2` -> `datetime_series_short`\r\n* `simple` -> `simple_frame`\r\n\r\nNeed to incrementally replace their usages in `pandas/tests/frame/` (example below).\r\n\r\n- [x] Create `conftest.py` and translate `TestData`-attributes into fixtures (#22236)\r\n- [x] `test_alter_axes.py` (#22236)\r\n- [x] `test_analytics.py` (#22733)\r\n- [x] `test_api.py` (#22738)\r\n- [x] `test_apply.py` (#22735)\r\n- [x] `test_arithmetic.py` (#22736)\r\n- [x] `test_asof.py` (#25628)\r\n- [x] `test_axis_select_reindex.py` (#25627)\r\n- [x] `test_block_internals.py` (#22926)\r\n- [x] `test_combine_concat.py` (#25634)\r\n- [ ] `test_constructors.py` (#25635)\r\n- [ ] `test_convert_to.py`\r\n- [ ] `test_dtypes.py` (#25636)\r\n- [x] `test_duplicates.py`\r\n- [x] `test_indexing.py` (#25633)\r\n- [x] 
`test_join.py` (#25639)\r\n- [x] `test_missing.py` (#25640)\r\n- [x] `test_mutate_columns.py` (#25642)\r\n- [ ] `test_nonunique_indexes.py`\r\n- [x] `test_operators.py` (#25641)\r\n- [ ] `test_period.py`\r\n- [ ] `test_quantile.py`\r\n- [ ] `test_query_eval.py`\r\n- [ ] `test_rank.py`\r\n- [ ] `test_replace.py`\r\n- [ ] `test_repr_info.py`\r\n- [ ] `test_reshape.py`\r\n- [ ] `test_sort_values_level_as_str.py`\r\n- [ ] `test_sorting.py`\r\n- [ ] `test_subclass.py`\r\n- [ ] `test_timeseries.py`\r\n- [ ] `test_timezones.py`\r\n- [ ] `test_to_csv.py`\r\n- [ ] `test_validate.py`\r\n\r\nThings for follow-ups:\r\n- Remove other class-based test-methods\r\n- Turn tests from class- to function-based\r\n\r\nAn example from #22236 - before:\r\n```\r\ndef test_set_columns(self):\r\n cols = Index(np.arange(len(self.mixed_frame.columns)))\r\n self.mixed_frame.columns = cols\r\n with tm.assert_raises_regex(ValueError, 'Length mismatch'):\r\n self.mixed_frame.columns = cols[::2]\r\n```\r\nAfter:\r\n```\r\ndef test_set_columns(self, float_string_frame):\r\n cols = Index(np.arange(len(float_string_frame.columns)))\r\n float_string_frame.columns = cols\r\n with tm.assert_raises_regex(ValueError, 'Length mismatch'):\r\n float_string_frame.columns = cols[::2]\r\n```\r\n\r\nBasically, it comes down to replacing all the occurrences of `self.` with `translation_guide[]` (and specifying`` as a parameter to the function).\r\n\r\nPS. Note that some fixtures added by #22236 have now been removed by #24885. Please check #24885 which code was removed, in case you should need it for the fixturisation. Alternatively, you can ping me, @jbrockmendel or @jreback.", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/29226", "commit_html_url": null, "file_loc": "{'base_commit': '953757a3e37ffb80570a20a8eca52dae35fc27bb', 'files': [{'path': 'pandas/tests/frame/common.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3, 5, 6, 8, 9, 11, 12, 13, 15, 17, 18, 21, 22, 23, 24, 26, 27, 28, 30, 31, 32, 33, 35, 36, 37, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 102, 103, 104, 106, 107, 108, 110, 111, 112, 114, 115, 116, 118, 121, 122]}}}, {'path': 'pandas/tests/frame/test_indexing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [28]}, \"('TestDataFrameIndexing', None, 39)\": {'mod': [39]}, \"('TestDataFrameIndexing', 'test_setitem_fancy_mixed_2d', 1166)\": {'mod': [1170, 1171]}, \"('TestDataFrameIndexingDatetimeWithTZ', None, 3405)\": {'mod': [3405]}, \"('TestDataFrameIndexingUInt64', None, 3464)\": {'mod': [3464]}}}, {'path': 'pandas/tests/frame/test_query_eval.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12]}, \"('TestDataFrameQueryNumExprPython', 'setup_class', 703)\": {'mod': [707]}, \"('TestDataFrameQueryPythonPandas', 'setup_class', 807)\": {'mod': [811]}, \"('TestDataFrameQueryPythonPython', 'setup_class', 827)\": {'mod': [830]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/tests/frame/common.py"], "doc": [], "test": ["pandas/tests/frame/test_indexing.py", "pandas/tests/frame/test_query_eval.py"], "config": [], "asset": []}} +{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": 
"953757a3e37ffb80570a20a8eca52dae35fc27bb", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/22471", "iss_label": "Testing\nClean\ngood first issue", "title": "TST/CLN: remove TestData from frame-tests; replace with fixtures", "body": "Following review in #22236: \r\n> ok, pls open a new issue that refs this, to remove use of `TestData` in favor of fixtures\r\n\r\nStarted the process in that PR by creating a `conftest.py` that translates all the current attributes of `TestData` to fixtures, with the following \"translation guide\":\r\n\r\n* `frame` -> `float_frame`\r\n* `frame2` -> `float_frame2`\r\n* `intframe` -> `int_frame`\r\n* `tsframe` -> `datetime_frame`\r\n* `mixed_frame` -> `float_string_frame`\r\n* `mixed_float` -> `mixed_float_frame`\r\n* `mixed_float2` -> `mixed_float_frame2`\r\n* `mixed_int` -> `mixed_int_frame`\r\n* `all_mixed` -> `mixed_type_frame`\r\n* `tzframe` -> `timezone_frame`\r\n* `empty` -> `empty_frame`\r\n* `ts1` -> `datetime_series`\r\n* `ts2` -> `datetime_series_short`\r\n* `simple` -> `simple_frame`\r\n\r\nNeed to incrementally replace their usages in `pandas/tests/frame/` (example below).\r\n\r\n- [x] Create `conftest.py` and translate `TestData`-attributes into fixtures (#22236)\r\n- [x] `test_alter_axes.py` (#22236)\r\n- [x] `test_analytics.py` (#22733)\r\n- [x] `test_api.py` (#22738)\r\n- [x] `test_apply.py` (#22735)\r\n- [x] `test_arithmetic.py` (#22736)\r\n- [x] `test_asof.py` (#25628)\r\n- [x] `test_axis_select_reindex.py` (#25627)\r\n- [x] `test_block_internals.py` (#22926)\r\n- [x] `test_combine_concat.py` (#25634)\r\n- [ ] `test_constructors.py` (#25635)\r\n- [ ] `test_convert_to.py`\r\n- [ ] `test_dtypes.py` (#25636)\r\n- [x] `test_duplicates.py`\r\n- [x] `test_indexing.py` (#25633)\r\n- [x] `test_join.py` (#25639)\r\n- [x] `test_missing.py` (#25640)\r\n- [x] `test_mutate_columns.py` (#25642)\r\n- [ ] `test_nonunique_indexes.py`\r\n- [x] `test_operators.py` (#25641)\r\n- [ ] `test_period.py`\r\n- [ ] `test_quantile.py`\r\n- [ ] `test_query_eval.py`\r\n- [ ] `test_rank.py`\r\n- [ ] `test_replace.py`\r\n- [ ] `test_repr_info.py`\r\n- [ ] `test_reshape.py`\r\n- [ ] `test_sort_values_level_as_str.py`\r\n- [ ] `test_sorting.py`\r\n- [ ] `test_subclass.py`\r\n- [ ] `test_timeseries.py`\r\n- [ ] `test_timezones.py`\r\n- [ ] `test_to_csv.py`\r\n- [ ] `test_validate.py`\r\n\r\nThings for follow-ups:\r\n- Remove other class-based test-methods\r\n- Turn tests from class- to function-based\r\n\r\nAn example from #22236 - before:\r\n```\r\ndef test_set_columns(self):\r\n cols = Index(np.arange(len(self.mixed_frame.columns)))\r\n self.mixed_frame.columns = cols\r\n with tm.assert_raises_regex(ValueError, 'Length mismatch'):\r\n self.mixed_frame.columns = cols[::2]\r\n```\r\nAfter:\r\n```\r\ndef test_set_columns(self, float_string_frame):\r\n cols = Index(np.arange(len(float_string_frame.columns)))\r\n float_string_frame.columns = cols\r\n with tm.assert_raises_regex(ValueError, 'Length mismatch'):\r\n float_string_frame.columns = cols[::2]\r\n```\r\n\r\nBasically, it comes down to replacing all the occurrences of `self.` with `translation_guide[]` (and specifying`` as a parameter to the function).\r\n\r\nPS. Note that some fixtures added by #22236 have now been removed by #24885. Please check #24885 which code was removed, in case you should need it for the fixturisation. 
Alternatively, you can ping me, @jbrockmendel or @jreback.", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/29226", "commit_html_url": null, "file_loc": "{'base_commit': '953757a3e37ffb80570a20a8eca52dae35fc27bb', 'files': [{'path': 'pandas/tests/frame/common.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3, 5, 6, 8, 9, 11, 12, 13, 15, 17, 18, 21, 22, 23, 24, 26, 27, 28, 30, 31, 32, 33, 35, 36, 37, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 102, 103, 104, 106, 107, 108, 110, 111, 112, 114, 115, 116, 118, 121, 122]}}}, {'path': 'pandas/tests/frame/test_indexing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [28]}, \"('TestDataFrameIndexing', None, 39)\": {'mod': [39]}, \"('TestDataFrameIndexing', 'test_setitem_fancy_mixed_2d', 1166)\": {'mod': [1170, 1171]}, \"('TestDataFrameIndexingDatetimeWithTZ', None, 3405)\": {'mod': [3405]}, \"('TestDataFrameIndexingUInt64', None, 3464)\": {'mod': [3464]}}}, {'path': 'pandas/tests/frame/test_query_eval.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12]}, \"('TestDataFrameQueryNumExprPython', 'setup_class', 703)\": {'mod': [707]}, \"('TestDataFrameQueryPythonPandas', 'setup_class', 807)\": {'mod': [811]}, \"('TestDataFrameQueryPythonPython', 'setup_class', 827)\": {'mod': [830]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/tests/frame/common.py"], "doc": [], "test": ["pandas/tests/frame/test_indexing.py", "pandas/tests/frame/test_query_eval.py"], "config": [], "asset": []}} {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "98efd264560983ed1d383222e3d5d22ed87169be", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/75", "iss_label": "API access", "title": "API Rate Limit Reached with new key", "body": "I just create a new key and it's failing to run:\r\n```\r\nContinue (y/n): y\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. Waiting 10 seconds...\r\nError: API Rate Limit Reached. 
Waiting 10 seconds...\r\n```\r\n\r\n![image](https://user-images.githubusercontent.com/1215497/229545231-7b463bc9-4630-45d5-a8cc-41df10e4e4be.png)\r\n", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/1304", "file_loc": "{'base_commit': '98efd264560983ed1d383222e3d5d22ed87169be', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [147], 'mod': [108]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} {"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "6a03ad082492268d60fa23ba5f3dcebd1630593e", "iss_has_pr": 1, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/317", "iss_label": "enhancement", "title": "Support for ChatGLM", "body": "**Description**\r\n\r\n[ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B)\r\n\r\nA Chinese chat AI based on GLM was released by THU.\r\n", "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/1256", "file_loc": "{'base_commit': '6a03ad082492268d60fa23ba5f3dcebd1630593e', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [221]}}}, {'path': 'download-model.py', 'status': 'modified', 'Loc': {\"(None, 'get_download_links_from_huggingface', 82)\": {'mod': [111]}}}, {'path': 'models/config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47]}}}, {'path': 'modules/chat.py', 'status': 'modified', 'Loc': {\"(None, 'generate_chat_prompt', 21)\": {'mod': [52, 63]}}}, {'path': 'modules/models.py', 'status': 'modified', 'Loc': {\"(None, 'load_model', 41)\": {'add': [46, 122], 'mod': [50, 82, 159, 168, 188]}, '(None, None, None)': {'mod': [13, 14]}}}, {'path': 'modules/shared.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [115, 164]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["modules/shared.py", "modules/chat.py", "download-model.py", "modules/models.py"], "doc": ["README.md"], "test": [], "config": ["models/config.yaml"], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "130601e076ec5ca8298b95c3d02122ac5d8cf8eb", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2372", "iss_label": "Bug\nModerate", "title": "StratifiedKFold should do its best to preserve the dataset dependency structure", "body": "As highlighted in this [notebook](http://nbviewer.ipython.org/urls/raw.github.com/ogrisel/notebooks/master/Non%2520IID%2520cross-validation.ipynb) the current implementation of `StratifiedKFold` (which is used by default by `cross_val_score` and `GridSearchCV` for classification problems) breaks the dependency structure of the dataset by computing the folds based on the sorted labels.\n\nInstead one should probably do an implementation that performs individual dependency preserving KFold on for each possible label value and aggregate the folds to get the `StratifiedKFold` final folds.\n\nThis might incur a refactoring to get rid of the `_BaseKFold` base class. 
It might also make it easier to implement a `shuffle=True` option for `StratifiedKFold`.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/2463", "file_loc": "{'base_commit': '130601e076ec5ca8298b95c3d02122ac5d8cf8eb', 'files': [{'path': 'doc/modules/cross_validation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [108, 109, 115, 122, 123, 124, 125, 200, 201, 205, 206, 209, 210]}}}, {'path': 'doc/tutorial/statistical_inference/model_selection.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [146, 148, 149, 150, 151, 166, 167]}}}, {'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [46, 2290], 'mod': [784]}}}, {'path': 'sklearn/cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, \"('StratifiedKFold', '__init__', 375)\": {'add': [385], 'mod': [378, 379]}, \"('StratifiedKFold', None, 335)\": {'mod': [388, 389, 390, 391, 392]}}}, {'path': 'sklearn/feature_selection/tests/test_rfe.py', 'status': 'modified', 'Loc': {\"(None, 'test_rfecv', 64)\": {'add': [78], 'mod': [72, 80, 85, 86, 87, 90, 96, 97, 101, 106, 107]}}}, {'path': 'sklearn/tests/test_cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 24, 93], 'mod': [152]}, \"(None, 'test_kfold_valueerrors', 95)\": {'add': [112], 'mod': [103, 104]}, \"(None, 'test_kfold_indices', 127)\": {'mod': [130, 131, 132, 133, 134, 135, 137, 138]}, \"(None, 'test_shuffle_kfold', 153)\": {'mod': [156, 157, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 174, 175]}, \"(None, 'test_cross_val_score_with_score_func_classification', 376)\": {'mod': [382, 388, 394, 399]}, \"(None, 'test_permutation_score', 429)\": {'mod': [453, 473, 480]}}}, {'path': 'sklearn/tests/test_naive_bayes.py', 'status': 'modified', 'Loc': {\"(None, 'test_check_accuracy_on_digits', 330)\": {'mod': [332, 333, 341, 344, 348, 351, 355]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": ["doc/modules/cross_validation.rst", "doc/tutorial/statistical_inference/model_selection.rst", "doc/whats_new.rst"], "test": ["sklearn/tests/test_naive_bayes.py", "sklearn/feature_selection/tests/test_rfe.py", "sklearn/tests/test_cross_validation.py"], "config": [], "asset": []}} {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "6ff8478118935b72c35f3ec1b31e74f2a1aa2e90", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/528", "iss_label": "enhancement\ngood first issue\npotential plugin\nStale", "title": "Auto-GPT System Awareness", "body": "### System Awareness\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Summary \ud83d\udca1\r\n\r\nBefore going out to look at the internet \r\nIt would be helpful if upon activation the AI took inventory of the system it was on and shared the available tools and capabilities\r\nand if they were insufficient begin researching and developing GAP tools to use during the session with the expressed request to push the GAP tools via PR back to the community\r\n\r\n### Examples \ud83c\udf08\r\n\r\nAI System initializing\r\n- MacOS \r\n- Python3\r\n- Pip\r\n- Shell Commands available...\r\n- Desktop App skills available...\r\n\r\nWhat are your goals?\r\n\r\n### Motivation \ud83d\udd26\r\n\r\nusuability ", "pr_html_url": 
"https://github.com/Significant-Gravitas/AutoGPT/pull/4548", "file_loc": "{'base_commit': '6ff8478118935b72c35f3ec1b31e74f2a1aa2e90', 'files': [{'path': '.github/PULL_REQUEST_TEMPLATE.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [44]}}}, {'path': '.github/workflows/ci.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72]}}}, {'path': '.pre-commit-config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [34]}}}, {'path': 'autogpt/plugins.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 5]}, \"(None, 'scan_plugins', 203)\": {'add': [219]}}}, {'path': 'scripts/install_plugin_deps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 32]}, \"(None, 'install_plugin_dependencies', 8)\": {'add': [18]}}}, {'path': 'tests/integration/test_plugins.py', 'status': 'modified', 'Loc': {\"('MockConfig', 'mock_config_openai_plugin', 37)\": {'mod': [42]}, \"('MockConfig', 'mock_config_generic_plugin', 59)\": {'mod': [63]}, \"(None, 'test_scan_plugins_generic', 68)\": {'mod': [71]}}}, {'path': 'tests/integration/test_web_selenium.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 4, 6]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/plugins.py", "scripts/install_plugin_deps.py"], "doc": [".github/PULL_REQUEST_TEMPLATE.md"], "test": ["tests/integration/test_web_selenium.py", "tests/integration/test_plugins.py"], "config": [".github/workflows/ci.yml", ".pre-commit-config.yaml"], "asset": []}} {"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "707ab7b3f84fb5664ff63da0b52e7b0d2e4df545", "iss_has_pr": 1, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/908", "iss_label": "bug", "title": "Agent stuck in the \"starting task\" step--Unsupported Protocol", "body": "\r\n#### Describe the bug\r\n\r\nI asked the agent to build a calculator, but it didn't give me any response, just stuck in the starting step.\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n\r\n```bash\r\ncommit e9121b78fed0b5ef36718ca0bf59588c0b094b86 (HEAD -> main)\r\nAuthor: Xingyao Wang \r\nDate: Sun Apr 7 16:07:59 2024 +0800\r\n```\r\n\r\n\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```toml\r\nLLM_MODEL=\"gpt-3.5-turbo-1106\"\r\nLLM_API_KEY=\"already set, and have test in python script, which works\"\r\nLLM_EMBEDDING_MODEL=\"openai\"\r\nWORKSPACE_DIR=\"./workspace\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model: PlannerAgent\r\n* Agent: gpt-3.5-turbo-1106\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\nmake build \r\nmake run\r\n```\r\n\r\n**Steps to Reproduce**:\r\nrun the commands, input: build a calculator with python\r\n\r\n**Logs, error messages, and screenshots**:\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/86202027/b078b7b0-446d-4e9d-bc83-ff3c270d9512)\r\nbackend:\r\n```\r\nINFO: 127.0.0.1:34564 - \"GET /litellm-agents HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:34572 - \"GET /messages/total HTTP/1.1\" 200 OK\r\nINFO: 127.0.0.1:34584 - \"DELETE /messages HTTP/1.1\" 200 OK\r\n\r\n\r\n==============\r\nSTEP 0\r\n\r\nPLAN:\r\nbuild a calculator with python\r\n\r\nINFO:\r\nHINT:\r\nYou're not currently working on any tasks. 
Your next action MUST be to mark a task as in_progress.\r\n```\r\n\r\nfrontend:\r\n```\r\n22:35:39 - opendevin:INFO: sandbox.py:117 - Using workspace directory: /mnt/d/OpenDevin/workspace\r\n22:35:39 - opendevin:INFO: sandbox.py:257 - Container stopped\r\n22:35:39 - opendevin:INFO: sandbox.py:277 - Container started\r\n22:37:54 - opendevin:INFO: sandbox.py:117 - Using workspace directory: /mnt/d/OpenDevin/workspace\r\n```\r\n\r\nllm prompt_001:\r\n```\r\n[{'content': '\\n# Task\\nYou\\'re a diligent software engineer AI. You can\\'t see, draw, or interact with a\\nbrowser, but you can read and write files, and you can run commands, and you can think.\\n\\nYou\\'ve been given the following task:\\n\\nbuild a calculator with python\\n\\n## Plan\\nAs you complete this task, you\\'re building a plan and keeping\\ntrack of your progress. Here\\'s a JSON representation of your plan:\\n\\n{\\n \"id\": \"0\",\\n \"goal\": \"build a calculator with python\",\\n \"state\": \"open\",\\n \"subtasks\": []\\n}\\n\\n\\nYou\\'re not currently working on any tasks. Your next action MUST be to mark a task as in_progress.\\n\\nYou\\'re responsible for managing this plan and the status of tasks in\\nit, by using the `add_task` and `modify_task` actions described below.\\n\\nIf the History below contradicts the state of any of these tasks, you\\nMUST modify the task using the `modify_task` action described below.\\n\\nBe sure NOT to duplicate any tasks. Do NOT use the `add_task` action for\\na task that\\'s already represented. Every task must be represented only once.\\n\\nTasks that are sequential MUST be siblings. They must be added in order\\nto their parent task.\\n\\nIf you mark a task as \\'completed\\', \\'verified\\', or \\'abandoned\\',\\nall non-abandoned subtasks will be marked the same way.\\nSo before closing a task this way, you MUST not only be sure that it has\\nbeen completed successfully--you must ALSO be sure that all its subtasks\\nare ready to be marked the same way.\\n\\nIf, and only if, ALL tasks have already been marked verified,\\nyou MUST respond with the `finish` action.\\n\\n## History\\nHere is a recent history of actions you\\'ve taken in service of this plan,\\nas well as observations you\\'ve made. This only includes the MOST RECENT\\nten actions--more happened before that.\\n\\n[]\\n\\n\\nYour most recent action is at the bottom of that history.\\n\\n## Action\\nWhat is your next thought or action? Your response must be in JSON format.\\n\\nIt must be an object, and it must contain two fields:\\n* `action`, which is one of the actions below\\n* `args`, which is a map of key-value pairs, specifying the arguments for that action\\n\\n* `read` - reads the content of a file. Arguments:\\n * `path` - the path of the file to read\\n* `write` - writes the content to a file. Arguments:\\n * `path` - the path of the file to write\\n * `content` - the content to write to the file\\n* `run` - runs a command on the command line in a Linux shell. Arguments:\\n * `command` - the command to run\\n * `background` - if true, run the command in the background, so that other commands can be run concurrently. Useful for e.g. starting a server. You won\\'t be able to see the logs. You don\\'t need to end the command with `&`, just set this to true.\\n* `kill` - kills a background command\\n * `id` - the ID of the background command to kill\\n* `browse` - opens a web page. Arguments:\\n * `url` - the URL to open\\n* `think` - make a plan, set a goal, or record your thoughts. 
Arguments:\\n * `thought` - the thought to record\\n* `add_task` - add a task to your plan. Arguments:\\n * `parent` - the ID of the parent task\\n * `goal` - the goal of the task\\n * `subtasks` - a list of subtasks, each of which is a map with a `goal` key.\\n* `modify_task` - close a task. Arguments:\\n * `id` - the ID of the task to close\\n * `state` - set to \\'in_progress\\' to start the task, \\'completed\\' to finish it, \\'verified\\' to assert that it was successful, \\'abandoned\\' to give up on it permanently, or `open` to stop working on it for now.\\n* `finish` - if ALL of your tasks and subtasks have been verified or abandoned, and you\\'re absolutely certain that you\\'ve completed your task and have tested your work, use the finish action to stop working.\\n\\nYou MUST take time to think in between read, write, run, browse, and recall actions.\\nYou should never act twice in a row without thinking. But if your last several\\nactions are all `think` actions, you should consider taking a different action.\\n\\nWhat is your next thought or action? Again, you must reply with JSON, and only with JSON.\\n\\nYou\\'re not currently working on any tasks. Your next action MUST be to mark a task as in_progress.\\n', 'role': 'user'}]\r\n```\r\n\r\nllm response is empty\r\n\r\n#### Additional Context\r\nI also tried to use gpt-4 and got the same result.\r\n", "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/960", "file_loc": "{'base_commit': '707ab7b3f84fb5664ff63da0b52e7b0d2e4df545', 'files': [{'path': 'opendevin/config.py', 'status': 'modified', 'Loc': {\"(None, 'get_all', 78)\": {'mod': [78, 82]}}}, {'path': 'opendevin/server/agent/manager.py', 'status': 'modified', 'Loc': {\"('AgentManager', 'create_controller', 93)\": {'mod': [107, 108, 109, 110, 111]}}}, {'path': 'opendevin/server/listen.py', 'status': 'modified', 'Loc': {\"(None, 'read_default_model', 114)\": {'mod': [115]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["opendevin/config.py", "opendevin/server/agent/manager.py", "opendevin/server/listen.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "59f4d299b6ae3232a1d8fe5d5d9652bffa17a728", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/809", "iss_label": "", "title": "facerec_from_webcam_multiprocessing.py run Global is not defined", "body": "* face_recognition version: 1.23\r\n* Python version: 3.6.6\r\n* Operating System: windows 10\r\n\r\n### Description\r\n![image](https://user-images.githubusercontent.com/2375460/56864177-e74b0900-69f1-11e9-9cad-d44cc8ca9d3d.png)\r\n\r\n\r\n### What I Did\r\n\r\n```\r\nfacerec_from_webcam_multiprocessing.py run Global is not defined. 
pls fix it, thanks\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ageitgey/face_recognition/pull/905", "commit_html_url": null, "file_loc": "{'base_commit': '59f4d299b6ae3232a1d8fe5d5d9652bffa17a728', 'files': [{'path': 'examples/facerec_from_webcam_multiprocessing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 113], 'mod': [3, 125, 130, 131, 154, 189]}, \"(None, 'next_id', 17)\": {'mod': [17]}, \"(None, 'prev_id', 25)\": {'mod': [25]}, \"(None, 'capture', 33)\": {'mod': [33, 43, 47]}, \"(None, 'process', 56)\": {'mod': [56, 62, 72, 109]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/facerec_from_webcam_multiprocessing.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "59f4d299b6ae3232a1d8fe5d5d9652bffa17a728", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/809", "iss_label": "", "title": "facerec_from_webcam_multiprocessing.py run Global is not defined", "body": "* face_recognition version: 1.23\r\n* Python version: 3.6.6\r\n* Operating System: windows 10\r\n\r\n### Description\r\n![image](https://user-images.githubusercontent.com/2375460/56864177-e74b0900-69f1-11e9-9cad-d44cc8ca9d3d.png)\r\n\r\n\r\n### What I Did\r\n\r\n```\r\nfacerec_from_webcam_multiprocessing.py run Global is not defined. pls fix it, thanks\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ageitgey/face_recognition/pull/905", "commit_html_url": null, "file_loc": "{'base_commit': '59f4d299b6ae3232a1d8fe5d5d9652bffa17a728', 'files': [{'path': 'examples/facerec_from_webcam_multiprocessing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 113], 'mod': [3, 125, 130, 131, 154, 189]}, \"(None, 'next_id', 17)\": {'mod': [17]}, \"(None, 'prev_id', 25)\": {'mod': [25]}, \"(None, 'capture', 33)\": {'mod': [33, 43, 47]}, \"(None, 'process', 56)\": {'mod': [56, 62, 72, 109]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["examples/facerec_from_webcam_multiprocessing.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "dc86509b44b3fb0cd9a1a6d6ed564b082dc50848", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/26139", "iss_label": "Docs\nIO HDF5", "title": "Doc for HDFStore compression unclear on what the default value of None does", "body": "The doc for the `HDFStore` class mentions:\r\n\r\n``` \r\ncomplevel : int, 0-9, default None\r\n Specifies a compression level for data.\r\n A value of 0 disables compression.\r\n```\r\n\r\nThat doesn't actually answer the question of what compression level is used when the default (None) is used, though. Is None translated further down to 0? it turns out yes, but you have to dig in the code to actually figure that out. And it could as well have been translated eventually to any other value.\r\n\r\nTwo options:\r\n1. Actually change the default in the `complevel` argument to be \"0\". (It's an immutable object, so it's fine as a default value for a function argument.)\r\n2. Just adjust the doc in some way.\r\n\r\nWhen the right solution is decided, I can do a pull request with it. 
Thanks!", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/26158", "file_loc": "{'base_commit': 'dc86509b44b3fb0cd9a1a6d6ed564b082dc50848', 'files': [{'path': 'pandas/io/pytables.py', 'status': 'modified', 'Loc': {\"('HDFStore', None, 401)\": {'mod': [425]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["pandas/io/pytables.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "0949d2e77022ad69cc07d4b25a858a7e023503ac", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/1207", "iss_label": "", "title": "git push upstream branch does not exist, wrong command recommended first", "body": "\r\n\r\n\r\n\r\nRecently I noticed a change in a `thefuck` behavior that I use very regularly which I wanted to call out as what I think is an unwanted change. This was introduced very recently, I believe with the 3.31 release. When using `git push` on a git repository where the branch does not exist in the upstream repository, `git` responds with a specific command one should run to create the upstream branch. Prior to version 3.31, `thefuck` seemed to recognize this and made the first suggested Corrected Command was the one `git` recommended. As of version 3.31, `thefuck` instead puts a generic `git push --no-verify` command first, and the one `git` recommended is instead the second result.\r\n\r\nIn this case where `git` recommends a specific command, `git push --no-verify` doesn't actually help or do what the user wants; you need the `git push --set-upstream origin branch-name` command which `thefuck` now arrives at second. Because of the inconvenience for this particular case, combined with the fact that the first option recommended by `thefuck` isn't functionally valid, the prior behavior is more correct for this particular case.\r\n\r\nBelow is all the debug information requested in the issue template:\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.31 using Python 3.9.5 and ZSH 5.8\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n Arch Linux\r\n\r\nHow to reproduce the bug:\r\n\r\n - In a git repo, create a branch which does not exist in the upstream repository\r\n - Attempt to push the branch with `git push`\r\n - You should see an error message saying \"fatal: The current branch branch-name has no upstream branch. 
To push the current branch and set the remote as upstream, use git push --set-upstream origin branch-name\"\r\n - invoke `thefuck`\r\n - Prior to 3.31, `thefuck` would present as the first option the exact command which git tells you to use (git push --set-upstream origin branch-name).\r\n - As of 3.31, `thefuck` instead presents as the first option a more generic `git push --no-verify`, and git's recommended command is the second result.\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\nhttps://pastebin.com/qpyEcreC\r\n\r\nIf the bug only appears with a specific application, the output of that application and its version:\r\n\r\n git version 2.32.0\r\n\r\nAnything else you think is relevant:\r\n\r\n N/A\r\n\r\n\r\n", "pr_html_url": "https://github.com/nvbn/thefuck/pull/1208", "file_loc": "{'base_commit': '0949d2e77022ad69cc07d4b25a858a7e023503ac', 'files': [{'path': 'thefuck/rules/git_hook_bypass.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [26]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/rules/git_hook_bypass.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "b8a43011e75da4353b0d5ef314c96cb1276f12f0", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/3893", "iss_label": "", "title": "[Bug] 1.7.1 not support 1.6.0 script", "body": "Hello All,\r\n\r\nMy spider is created by scrapy 1.6.0.\r\nThese days, the scrapy updated to 1.7.1, and we found that it cannot support the code build by 1.6.0.\r\n\r\nHere is the error:\r\n```\r\nTraceback (most recent call last):\r\n File \"/usr/bin/scrapy\", line 6, in \r\n from scrapy.cmdline import execute\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/cmdline.py\", line 10, in \r\n from scrapy.crawler import CrawlerProcess\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/crawler.py\", line 11, in \r\n from scrapy.core.engine import ExecutionEngine\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/core/engine.py\", line 14, in \r\n from scrapy.core.scraper import Scraper\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/core/scraper.py\", line 18, in \r\n from scrapy.core.spidermw import SpiderMiddlewareManager\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/core/spidermw.py\", line 13, in \r\n from scrapy.utils.conf import build_component_list\r\n File \"/usr/lib64/python2.7/site-packages/scrapy/utils/conf.py\", line 4, in \r\n import configparser\r\nImportError: No module named configparser\r\n```\r\n\r\nWould you please take time to check the issue?\r\n\r\nAppreciate for your help in advance.\r\n\r\nThank you.", "pr_html_url": "https://github.com/scrapy/scrapy/pull/3896", "file_loc": "{'base_commit': 'b8a43011e75da4353b0d5ef314c96cb1276f12f0', 'files': [{'path': 'scrapy/utils/conf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7], 'mod': [4]}, \"(None, 'get_config', 94)\": {'mod': [97]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": ""}, "loctype": {"code": ["scrapy/utils/conf.py"], "doc": [], "test": [], "config": [], "asset": []}} @@ -160,7 +160,7 @@ {"organization": "geekan", 
"repo_name": "MetaGPT", "base_commit": "0958cc333ee13d1ce5216ae0bdeaa53b5eacc6ea", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1059", "iss_label": "bug", "title": "ReadTimeout when using local LLM", "body": "**Bug description**\r\nWhen hosting the following model; https://huggingface.co/oobabooga/CodeBooga-34B-v0.1 locally using LMStudio 0.2.14 on Linux Mint 21.3 Cinnamon I am sometimes (usually after several iterations when the context gets large) confronted with a ReadTimeout.\r\n\r\nMetaGPT main branch, commit id: adb42f4, it reports version: 0.7.4 with pip show metagpt. Used Python 3.9.18.\r\n\r\nI used the following code to try out MetaGPT\r\n```\r\nimport asyncio\r\nfrom metagpt.roles.di.data_interpreter import DataInterpreter\r\n\r\nasync def main(requirement: str = \"\"):\r\n di = DataInterpreter()\r\n await di.run(requirement)\r\n\r\nif __name__ == \"__main__\":\r\n requirement = \"Create a dnd 5th edition graph displaying xp per level based on information from a reputable source determined by Googling. First write results in a CSV and validate the CSV contains multiple records. If the file does not contain records, determine if you can fix the code or whether you need to look at another source. After the CSV files is filled with records, create the graph based on this.\"\r\n\r\n asyncio.run(main(requirement))\r\n```\r\nI got the below exception\r\n```\r\nTraceback (most recent call last):\r\n File \"metagpt/lib/python3.9/site-packages/httpx/_transports/default.py\", line 69, in map_httpcore_exceptions\r\n yield\r\n File \"metagpt/lib/python3.9/site-packages/httpx/_transports/default.py\", line 254, in __aiter__\r\n async for part in self._httpcore_stream:\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/connection_pool.py\", line 367, in __aiter__\r\n raise exc from None\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/connection_pool.py\", line 363, in __aiter__\r\n async for part in self._stream:\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 349, in __aiter__\r\n raise exc\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 341, in __aiter__\r\n async for chunk in self._connection._receive_response_body(**kwargs):\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 210, in _receive_response_body\r\n event = await self._receive_event(timeout=timeout)\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py\", line 224, in _receive_event\r\n data = await self._network_stream.read(\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_backends/anyio.py\", line 36, in read\r\n return b\"\"\r\n File \"3.9.18/lib/python3.9/contextlib.py\", line 137, in __exit__\r\n self.gen.throw(typ, value, traceback)\r\n File \"metagpt/lib/python3.9/site-packages/httpcore/_exceptions.py\", line 14, in map_exceptions\r\n raise to_exc(exc) from exc\r\nhttpcore.ReadTimeout\r\n```\r\n**Bug solved method**\r\n\r\nIt would be nice if the timeout and retries are configurable to avoid this issue (for example like AutoGen does this in the LLM API configuration). N.b. I've tried larger local models in the past (for which disk swapping was required due to memory constraints). Those models can sometimes take more than an hour to respond. 
The model for which this bug is registered can fit in my CPU RAM (64Gb).", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/1060", "file_loc": "{'base_commit': '0958cc333ee13d1ce5216ae0bdeaa53b5eacc6ea', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6]}}}, {'path': 'metagpt/actions/action_node.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, \"('ActionNode', '_aask_v1', 411)\": {'mod': [419]}, \"('ActionNode', None, 122)\": {'mod': [451]}, \"('ActionNode', 'fill', 468)\": {'mod': [476]}}}, {'path': 'metagpt/configs/llm_config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, \"('LLMConfig', 'check_llm_key', 87)\": {'add': [90]}, \"('LLMConfig', None, 38)\": {'mod': [77]}}}, {'path': 'metagpt/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [134], 'mod': [126]}}}, {'path': 'metagpt/provider/anthropic_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, \"('AnthropicLLM', None, 14)\": {'mod': [44, 49, 50, 52]}}}, {'path': 'metagpt/provider/base_llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}, \"('BaseLLM', 'with_model', 257)\": {'add': [260]}, \"('BaseLLM', 'aask', 127)\": {'mod': [133, 149]}, \"('BaseLLM', None, 32)\": {'mod': [155, 165, 169, 173, 184, 194]}, \"('BaseLLM', 'aask_batch', 155)\": {'mod': [161]}, \"('BaseLLM', 'acompletion_text', 194)\": {'mod': [197, 198]}}}, {'path': 'metagpt/provider/dashscope_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27]}, \"('DashScopeLLM', None, 152)\": {'mod': [205, 211, 212, 214]}}}, {'path': 'metagpt/provider/general_api_base.py', 'status': 'modified', 'Loc': {\"('APIRequestor', 'arequest_raw', 556)\": {'mod': [576]}}}, {'path': 'metagpt/provider/google_gemini_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, \"('GeminiLLM', None, 41)\": {'mod': [126, 132, 133, 135]}}}, {'path': 'metagpt/provider/human_provider.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, \"('HumanProvider', None, 13)\": {'mod': [21, 38, 41, 45, 48]}, \"('HumanProvider', 'aask', 28)\": {'mod': [34, 36]}}}, {'path': 'metagpt/provider/ollama_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8]}, \"('OllamaLLM', None, 17)\": {'mod': [53, 65, 66, 68]}, \"('OllamaLLM', '_achat_completion', 53)\": {'mod': [58]}, \"('OllamaLLM', '_achat_completion_stream', 68)\": {'mod': [74]}}}, {'path': 'metagpt/provider/openai_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27]}, \"('OpenAILLM', None, 43)\": {'mod': [77, 107, 121, 122, 127, 128, 137, 154]}, \"('OpenAILLM', '_achat_completion_stream', 77)\": {'mod': [79]}, \"('OpenAILLM', '_cons_kwargs', 107)\": {'mod': [115]}, \"('OpenAILLM', 'acompletion_text', 137)\": {'mod': [142]}, \"('OpenAILLM', '_achat_completion_function', 145)\": {'mod': [146, 149]}}}, {'path': 'metagpt/provider/qianfan_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, \"('QianFanLLM', None, 23)\": {'mod': [110, 115, 116, 118]}}}, {'path': 'metagpt/provider/spark_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, \"('SparkLLM', None, 26)\": {'mod': [34, 37, 43, 46]}}}, {'path': 'metagpt/provider/zhipuai_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, \"('ZhiPuAILLM', None, 26)\": {'mod': [48, 54, 60, 61, 63]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': 
[37]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/configs/llm_config.py", "metagpt/provider/zhipuai_api.py", "metagpt/provider/ollama_api.py", "metagpt/provider/spark_api.py", "metagpt/provider/anthropic_api.py", "metagpt/provider/qianfan_api.py", "metagpt/actions/action_node.py", "metagpt/provider/openai_api.py", "metagpt/provider/google_gemini_api.py", "metagpt/provider/dashscope_api.py", "metagpt/provider/base_llm.py", "metagpt/provider/general_api_base.py", "metagpt/const.py", "metagpt/provider/human_provider.py"], "doc": [], "test": [], "config": ["config/config2.example.yaml", "requirements.txt"], "asset": []}} {"organization": "Textualize", "repo_name": "rich", "base_commit": "b5f0b743a7f50c72199eb792cd6e70730b60651f", "iss_has_pr": 1, "iss_html_url": "https://github.com/Textualize/rich/issues/2047", "iss_label": "Needs triage", "title": "[BUG] printing -\\n- in rich.progress context manager will kill the jupyter.", "body": "try this code in the jupyter notebook:\r\n\r\n```python\r\nfrom rich.progress import Progress\r\nwith Progress() as progress:\r\n print(\"-\\n-\")\r\nprint(\"finished\")\r\n```\r\nand it will show a popup message displaying that the kernel has died.\r\nI have tested it on google colab and mint.\r\n\r\nalso, I have installed rich using\r\n```\r\npip install rich[jupyter]\r\n```", "pr_html_url": "https://github.com/Textualize/rich/pull/2209", "file_loc": "{'base_commit': 'b5f0b743a7f50c72199eb792cd6e70730b60651f', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'rich/file_proxy.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2]}, \"('FileProxy', 'flush', 50)\": {'mod': [51, 52, 53, 54]}}}, {'path': 'tests/test_file_proxy.py', 'status': 'modified', 'Loc': {\"(None, 'test_flush', 20)\": {'add': [27]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["rich/file_proxy.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_file_proxy.py"], "config": [], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "439c19596a248a31cd1aa8220f54a622a0322160", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/3689", "iss_label": "", "title": "using sparse matrix in fit_params", "body": "When the value of a fit_params is sparse matrix, it will raise error from the following code.\nsklearn/cross_validation.py\n\n```\n1224 if hasattr(v, '__len__') and len(v) == n_samples else v)\n1225 for k, v in fit_params.items()])\n```\n\nIt is because the `__len__` of sparse matrix is defined as\nscipy/sparse/base.py\n\n```\n190 def __len__(self):\n191 # return self.getnnz()\n192 raise TypeError(\"sparse matrix length is ambiguous; use getnnz()\"\n193 \" or shape[0]\")\n```\n\nIs there anyway to circumpass this issue. 
I do not want to convert the sparse matrix into a dense one, since it will consume a big memory.\n", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/4049", "file_loc": "{'base_commit': '439c19596a248a31cd1aa8220f54a622a0322160', 'files': [{'path': 'sklearn/cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1073]}, \"(None, '_fit_and_predict', 1150)\": {'mod': [1186, 1188, 1189, 1190]}, \"(None, '_fit_and_score', 1305)\": {'mod': [1379, 1381, 1382, 1383, 1384, 1385]}}}, {'path': 'sklearn/tests/test_cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1108]}, \"(None, 'assert_fit_params', 595)\": {'mod': [596]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": [], "test": ["sklearn/tests/test_cross_validation.py"], "config": [], "asset": []}} -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/9174", "iss_label": "Bug\nhelp wanted", "title": "SVC and OneVsOneClassifier decision_function inconsistent on sub-sample", "body": "Hi,\r\n\r\nI'm seeing inconsistent numerical results with SVC's decision_function.\r\nWhen estimated over an entire batch of samples ( (n_samples, n_features) matrix ) compared to analyzing sample-by-sample, the results are not the same.\r\nThis is true for both the individual numerical values per sample and the overall distribution of the results.\r\n\r\n**The model is SVC with RBF kernel, for a 3-class classification:**\r\n```\r\nSVC(C=1.0, gamma=0.007, class_weight = new_class_weight, probability = True, random_state = 30, \r\ndecision_function_shape = 'ovr')\r\n```\r\n\r\n**The models are loaded from file:**\r\n\r\n`ML = joblib.load(\"model.pkl\")`\r\n\r\n**Option A, analyze a matrix:**\r\n\r\n`distances = ML.decision_function(X)`\r\n\r\n**Option B, analyze individual samples:** \r\n```\r\ndistances = numpy.zeros([X.shape[0], 3])\r\nfor i in range(X.shape[0]): \r\n distances[i,:]` = ML.decision_function(X[i,:].reshape(1,-1))\r\n```\r\n\r\n**Output for first two samples:**\r\n**Option A:**\r\nsample 1: [ 0.90835588, -0.17305875, 2.26470288]\r\nsample 2: [ 1.10437313, -0.2371539 , 2.13278077]\r\n\r\n**Option B:**\r\nsample 1: [ 0.82689247, -0.32689247, 2.5 ]\r\nsample 2: [ 1.22005359, -0.5 , 2.27994641]\r\n\r\nI couldn't find any indication for this behavior in the documentation.\r\n\r\nWindows-10-10.0.15063-SP0\r\nPython 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]\r\nNumPy 1.12.1\r\nSciPy 0.18.1\r\nScikit-Learn 0.18.1\r\n\r\nThanks!\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10440", "commit_html_url": null, "file_loc": "{'base_commit': 'adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69', 'files': [{'path': 'doc/modules/multiclass.rst', 'status': 'modified', 'Loc': {'(None, None, 230)': {'mod': [230]}}}, {'path': 'doc/modules/svm.rst', 'status': 'modified', 'Loc': {'(None, None, 116)': {'mod': [116]}, '(None, None, 118)': {'mod': [118]}}}, {'path': 'doc/whats_new/v0.21.rst', 'status': 'modified', 'Loc': {'(None, None, 26)': {'add': [26]}, '(None, None, 353)': {'add': [353]}}}, {'path': 'sklearn/svm/base.py', 'status': 'modified', 'Loc': {\"('BaseSVC', 
'decision_function', 527)\": {'add': [549]}}}, {'path': 'sklearn/utils/estimator_checks.py', 'status': 'modified', 'Loc': {\"(None, 'check_methods_subset_invariance', 815)\": {'mod': [839, 840]}}}, {'path': 'sklearn/utils/multiclass.py', 'status': 'modified', 'Loc': {\"(None, '_ovr_decision_function', 402)\": {'mod': [434, 435, 437, 438, 440, 444, 445, 446, 447]}}}, {'path': 'sklearn/utils/tests/test_multiclass.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 25]}, \"(None, 'test_safe_split_with_precomputed_kernel', 361)\": {'add': [380]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/utils/multiclass.py", "sklearn/utils/estimator_checks.py", "sklearn/svm/base.py"], "doc": ["doc/whats_new/v0.21.rst", "doc/modules/multiclass.rst", "doc/modules/svm.rst"], "test": ["sklearn/utils/tests/test_multiclass.py"], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/9174", "iss_label": "Bug\nhelp wanted", "title": "SVC and OneVsOneClassifier decision_function inconsistent on sub-sample", "body": "Hi,\r\n\r\nI'm seeing inconsistent numerical results with SVC's decision_function.\r\nWhen estimated over an entire batch of samples ( (n_samples, n_features) matrix ) compared to analyzing sample-by-sample, the results are not the same.\r\nThis is true for both the individual numerical values per sample and the overall distribution of the results.\r\n\r\n**The model is SVC with RBF kernel, for a 3-class classification:**\r\n```\r\nSVC(C=1.0, gamma=0.007, class_weight = new_class_weight, probability = True, random_state = 30, \r\ndecision_function_shape = 'ovr')\r\n```\r\n\r\n**The models are loaded from file:**\r\n\r\n`ML = joblib.load(\"model.pkl\")`\r\n\r\n**Option A, analyze a matrix:**\r\n\r\n`distances = ML.decision_function(X)`\r\n\r\n**Option B, analyze individual samples:** \r\n```\r\ndistances = numpy.zeros([X.shape[0], 3])\r\nfor i in range(X.shape[0]): \r\n distances[i,:]` = ML.decision_function(X[i,:].reshape(1,-1))\r\n```\r\n\r\n**Output for first two samples:**\r\n**Option A:**\r\nsample 1: [ 0.90835588, -0.17305875, 2.26470288]\r\nsample 2: [ 1.10437313, -0.2371539 , 2.13278077]\r\n\r\n**Option B:**\r\nsample 1: [ 0.82689247, -0.32689247, 2.5 ]\r\nsample 2: [ 1.22005359, -0.5 , 2.27994641]\r\n\r\nI couldn't find any indication for this behavior in the documentation.\r\n\r\nWindows-10-10.0.15063-SP0\r\nPython 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]\r\nNumPy 1.12.1\r\nSciPy 0.18.1\r\nScikit-Learn 0.18.1\r\n\r\nThanks!\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10440", "commit_html_url": null, "file_loc": "{'base_commit': 'adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69', 'files': [{'path': 'doc/modules/multiclass.rst', 'status': 'modified', 'Loc': {'(None, None, 230)': {'mod': [230]}}}, {'path': 'doc/modules/svm.rst', 'status': 'modified', 'Loc': {'(None, None, 116)': {'mod': [116]}, '(None, None, 118)': {'mod': [118]}}}, {'path': 'doc/whats_new/v0.21.rst', 'status': 'modified', 'Loc': {'(None, None, 26)': {'add': [26]}, '(None, None, 353)': {'add': [353]}}}, {'path': 'sklearn/svm/base.py', 'status': 'modified', 'Loc': {\"('BaseSVC', 'decision_function', 527)\": 
{'add': [549]}}}, {'path': 'sklearn/utils/estimator_checks.py', 'status': 'modified', 'Loc': {\"(None, 'check_methods_subset_invariance', 815)\": {'mod': [839, 840]}}}, {'path': 'sklearn/utils/multiclass.py', 'status': 'modified', 'Loc': {\"(None, '_ovr_decision_function', 402)\": {'mod': [434, 435, 437, 438, 440, 444, 445, 446, 447]}}}, {'path': 'sklearn/utils/tests/test_multiclass.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 25]}, \"(None, 'test_safe_split_with_precomputed_kernel', 361)\": {'add': [380]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/utils/multiclass.py", "sklearn/utils/estimator_checks.py", "sklearn/svm/base.py"], "doc": ["doc/whats_new/v0.21.rst", "doc/modules/multiclass.rst", "doc/modules/svm.rst"], "test": ["sklearn/utils/tests/test_multiclass.py"], "config": [], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "da90449edfa13b5be1550b3acc212dbf3a8c6e69", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/1064", "iss_label": "", "title": "allow spiders to return dicts instead of Items", "body": "In many cases the requirement to define and yield Items from a spider is an unnecessary complication. \n\nAn example from Scrapy tutorial:\n\n```\nimport scrapy\n\nclass DmozItem(scrapy.Item):\n title = scrapy.Field()\n link = scrapy.Field()\n desc = scrapy.Field()\n\nclass DmozSpider(scrapy.Spider):\n name = \"dmoz\"\n allowed_domains = [\"dmoz.org\"]\n start_urls = [\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/\",\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/\"\n ]\n\n def parse(self, response):\n for sel in response.xpath('//ul/li'):\n item = DmozItem()\n item['title'] = sel.xpath('a/text()').extract()\n item['link'] = sel.xpath('a/@href').extract()\n item['desc'] = sel.xpath('text()').extract()\n yield item\n```\n\nIt can be made simpler with dicts instead of Items:\n\n```\nimport scrapy\n\nclass DmozSpider(scrapy.Spider):\n name = \"dmoz\"\n allowed_domains = [\"dmoz.org\"]\n start_urls = [\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Books/\",\n \"http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/\"\n ]\n\n def parse(self, response):\n for sel in response.xpath('//ul/li'):\n yield {\n 'title': sel.xpath('a/text()').extract(),\n 'link': sel.xpath('a/@href').extract(),\n 'desc': sel.xpath('text()').extract(),\n }\n```\n\nThe version with dicts gives a developer less concepts to learn, and it is easier to explain.\n\nWhen field metadata is not used and data is exported to JSON/XML yielding Python dicts should be enough. 
Even when you export to CSV dicts could be enough - columns can be set explicitly by an user.\n\nThis should also prevent tickets like https://github.com/scrapy/scrapy/issues/968.\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/1081", "file_loc": "{'base_commit': 'da90449edfa13b5be1550b3acc212dbf3a8c6e69', 'files': [{'path': 'docs/index.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [61, 86], 'mod': [59, 75, 76]}}}, {'path': 'docs/topics/architecture.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [105, 108]}}}, {'path': 'docs/topics/exporters.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [199, 205], 'mod': [10, 93, 94, 95, 170, 171]}}}, {'path': 'docs/topics/images.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [66, 67, 68]}}}, {'path': 'docs/topics/item-pipeline.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [137], 'mod': [11, 12, 31, 32, 36, 158, 159]}}}, {'path': 'docs/topics/items.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13], 'mod': [11, 12, 16, 67]}}}, {'path': 'docs/topics/practices.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [187, 189, 190, 192, 193, 194, 196, 199, 201, 202, 204]}}}, {'path': 'docs/topics/signals.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74, 94]}}}, {'path': 'docs/topics/spider-middleware.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [93, 100, 101, 113]}}}, {'path': 'docs/topics/spiders.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [284, 290], 'mod': [27, 28, 44, 46, 47, 49, 50, 51, 52, 54, 55, 57, 59, 61, 63, 64, 66, 67, 68, 69, 71, 72, 74, 76, 77, 79, 80, 81, 82, 84, 85, 87, 89, 90, 91, 92, 98, 99, 106, 107, 201, 202, 203, 204, 234, 251, 252, 271, 274]}}}, {'path': 'scrapy/commands/parse.py', 'status': 'modified', 'Loc': {\"('Command', 'run_callback', 106)\": {'mod': [110]}}}, {'path': 'scrapy/contracts/default.py', 'status': 'modified', 'Loc': {\"('ReturnsContract', None, 21)\": {'mod': [38, 39]}, \"('ScrapesContract', 'post_process', 84)\": {'mod': [86]}}}, {'path': 'scrapy/contrib/exporter/__init__.py', 'status': 'modified', 'Loc': {\"('BaseItemExporter', '_get_serialized_fields', 52)\": {'mod': [53, 54, 59, 67, 68, 69, 72]}, \"('CsvItemExporter', '_write_headers_and_set_fields_to_export', 191)\": {'mod': [194]}}}, {'path': 'scrapy/contrib/pipeline/files.py', 'status': 'modified', 'Loc': {\"('FilesPipeline', 'item_completed', 269)\": {'mod': [270]}}}, {'path': 'scrapy/contrib/pipeline/images.py', 'status': 'modified', 'Loc': {\"('ImagesPipeline', 'item_completed', 111)\": {'mod': [112]}}}, {'path': 'scrapy/core/scraper.py', 'status': 'modified', 'Loc': {\"('Scraper', '_process_spidermw_output', 171)\": {'mod': [177, 186]}}}, {'path': 'tests/spiders.py', 'status': 'modified', 'Loc': {\"('ItemSpider', 'parse', 84)\": {'add': [87]}}}, {'path': 'tests/test_commands.py', 'status': 'modified', 'Loc': {\"('RunSpiderCommandTest', 'test_runspider', 132)\": {'add': [137], 'mod': [139, 141]}, '(None, None, None)': {'add': [241]}, \"('ParseCommandTest', 'setUp', 188)\": {'mod': [195, 196, 198, 204]}}}, {'path': 'tests/test_contracts.py', 'status': 'modified', 'Loc': {\"('TestSpider', None, 25)\": {'add': [41, 48, 56, 64]}, \"('ContractsManagerTest', 'test_returns', 104)\": {'add': [112]}, \"('ContractsManagerTest', None, 72)\": {'add': [122]}, \"('ContractsManagerTest', 'test_scrapes', 123)\": {'add': [131, 136]}}}, {'path': 'tests/test_contrib_exporter.py', 
'status': 'modified', 'Loc': {\"('BaseItemExporterTest', None, 18)\": {'add': [45], 'mod': [36]}, \"('XmlItemExporterTest', None, 196)\": {'add': [213]}, '(None, None, None)': {'add': [327], 'mod': [1, 5, 9, 10, 11]}, \"('BaseItemExporterTest', 'test_export_item', 36)\": {'mod': [39]}, \"('BaseItemExporterTest', 'test_serialize_field', 46)\": {'mod': [47, 48, 49, 50]}, \"('PythonItemExporterTest', 'test_nested_item', 79)\": {'mod': [81]}, \"('CsvItemExporterTest', None, 140)\": {'mod': [153, 154, 155]}, \"('CsvItemExporterTest', 'test_header', 153)\": {'mod': [157, 158, 159, 161, 162, 163, 164, 165, 166, 168, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 181]}, \"('CsvItemExporterTest', 'test_join_multivalue', 183)\": {'mod': [188, 189, 190, 191, 192, 193, 194]}, \"('XmlItemExporterTest', 'test_multivalued_fields', 218)\": {'mod': [219, 220, 221, 222, 223, 224, 225, 226]}, \"('XmlItemExporterTest', 'test_nested_item', 228)\": {'mod': [229, 231, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248]}, \"('XmlItemExporterTest', 'test_nested_list_item', 250)\": {'mod': [251, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267]}, \"('JsonLinesItemExporterTest', 'test_nested_item', 281)\": {'mod': [283]}, \"('JsonItemExporterTest', None, 298)\": {'mod': [309]}, \"('JsonItemExporterTest', 'test_two_items', 309)\": {'mod': [311, 312, 315]}, \"('CustomItemExporter', 'serialize_field', 332)\": {'mod': [336, 337]}, \"('CustomItemExporterTest', 'test_exporter_custom_serializer', 330)\": {'mod': [342, 343, 344, 345]}}}, {'path': 'tests/test_engine.py', 'status': 'modified', 'Loc': {\"('TestSpider', None, 36)\": {'add': [43]}, '(None, None, None)': {'add': [67]}, \"('TestSpider', 'parse_item', 51)\": {'mod': [52]}, \"('CrawlerRun', None, 81)\": {'mod': [84]}, \"('CrawlerRun', '__init__', 84)\": {'mod': [91, 92]}, \"('EngineTest', 'test_crawler', 154)\": {'mod': [155, 156, 157, 158, 159, 160, 161, 162]}}}, {'path': 'tests/test_pipeline_files.py', 'status': 'modified', 'Loc': {\"('FilesPipelineTestCaseFields', 'test_item_fields_default', 144)\": {'mod': [145, 150, 151, 152, 153, 154, 155, 156, 157]}, \"('FilesPipelineTestCaseFields', 'test_item_fields_override_settings', 159)\": {'mod': [160, 165, 166, 167, 168, 169, 170, 171, 172, 173]}}}, {'path': 'tests/test_pipeline_images.py', 'status': 'modified', 'Loc': {\"('ImagesPipelineTestCaseFields', 'test_item_fields_default', 170)\": {'mod': [171, 176, 177, 178, 179, 180, 181, 182, 183]}, \"('ImagesPipelineTestCaseFields', 'test_item_fields_override_settings', 185)\": {'mod': [186, 191, 192, 193, 194, 195, 196, 197, 198, 199]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/core/scraper.py", "scrapy/commands/parse.py", "scrapy/contracts/default.py", "scrapy/contrib/exporter/__init__.py", "tests/spiders.py", "scrapy/contrib/pipeline/images.py", "scrapy/contrib/pipeline/files.py"], "doc": ["docs/topics/practices.rst", "docs/topics/signals.rst", "docs/topics/spiders.rst", "docs/topics/architecture.rst", "docs/topics/items.rst", "docs/index.rst", "docs/topics/exporters.rst", "docs/topics/item-pipeline.rst", "docs/topics/images.rst", "docs/topics/spider-middleware.rst"], "test": ["tests/test_contracts.py", "tests/test_engine.py", "tests/test_commands.py", "tests/test_pipeline_files.py", "tests/test_pipeline_images.py", 
"tests/test_contrib_exporter.py"], "config": [], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "effd75dda5f4afa61f988035ff8fe4b3a447464e", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10059", "iss_label": "", "title": "Duplicated input points silently create duplicated clusters in KMeans", "body": "#### Description\r\nWhen there are duplicated input points to Kmeans resulting to number of unique points < number of requested clusters, there is no error thrown. Instead, clustering continues to (seemingly) produce the number of clusters requested, but some of them are exactly the same, so the cluster labels produced for the input points do not go all the way to number of requested clusters.\r\n\r\n#### Steps/Code to Reproduce\r\n```python\r\nfrom sklearn.cluster import KMeans\r\nimport numpy as np\r\n\r\n# some input points here are identical, so that n_total=17, n_unique=9\r\nx2d = np.array([(1086, 348), (1087, 347), (1190, 244), (1190, 244), (1086, 348), (1185, 249), (1193, 241), (1185, 249), (1087, 347), (1188, 247), (1187, 233), (26, 111), (26, 111), (26, 110), (26, 110), (26, 110), (26, 110)])\r\nkmeans = KMeans(n_clusters=10) # n_clusters > n_unique\r\nc_labels = kmeans.fit_predict(x2d)\r\nc_centers = kmeans.cluster_centers_\r\n```\r\n#### Expected Results\r\nEither an error thrown, or the cluster labels produced should match the unique clusters only (i.e. no identical cluster centres)\r\n\r\n#### Actual Results\r\n```python\r\n>>> c_labels # note there's no entry for cluster 9\r\narray([7, 2, 6, 6, 7, 5, 4, 5, 2, 1, 3, 8, 8, 0, 0, 0, 0], dtype=int32)\r\n>>> c_centers # two of these 10 clusters have identical centers, so only 9 of them are unique\r\narray([[ 26., 110.],\r\n [ 1188., 247.],\r\n [ 1087., 347.],\r\n [ 1187., 233.],\r\n [ 1193., 241.],\r\n [ 1185., 249.],\r\n [ 1190., 244.],\r\n [ 1086., 348.],\r\n [ 26., 111.],\r\n [ 26., 110.]]) \r\n```\r\n\r\n#### Versions\r\n```python\r\nDarwin-16.7.0-x86_64-i386-64bit\r\nPython 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:04:09)\r\n[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]\r\nNumPy 1.13.1\r\nSciPy 0.19.1\r\nScikit-Learn 0.18.2\r\n```", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10099", "file_loc": "{'base_commit': 'effd75dda5f4afa61f988035ff8fe4b3a447464e', 'files': [{'path': 'doc/whats_new/v0.20.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [136]}}}, {'path': 'sklearn/cluster/k_means_.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [34]}, \"(None, 'k_means', 167)\": {'add': [376]}}}, {'path': 'sklearn/cluster/tests/test_k_means.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17, 20]}, \"(None, 'test_sparse_validate_centers', 855)\": {'add': [869]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "code"}, "loctype": {"code": ["sklearn/cluster/k_means_.py"], "doc": ["doc/whats_new/v0.20.rst"], "test": ["sklearn/cluster/tests/test_k_means.py"], "config": [], "asset": []}} {"organization": "scrapy", "repo_name": "scrapy", "base_commit": "0dad0fce72266aa7b38b536f87bab26e7f233c74", "iss_has_pr": 1, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4477", "iss_label": "bug", "title": "is_generator_with_return_value raises IndentationError with a flush left doc string", "body": "### Description\r\n\r\nCode that 
is accepted by the python interpreter raises when fed through `textwrap.dedent`\r\n\r\n### Steps to Reproduce\r\n\r\n1. Create `is_generator_bug.py` with the content below (which I simplified from [the `is_generator_with_return_value` method body](https://github.com/scrapy/scrapy/blob/2.0.1/scrapy/utils/misc.py#L186-L187)\r\n2. Run `python is_generator_bug.py`\r\n3. Observe the kaboom\r\n\r\n```python\r\nimport ast\r\nimport inspect\r\nfrom textwrap import dedent\r\nclass Bob:\r\n def doit(self):\r\n \"\"\"\r\nthis line is flush left\r\n \"\"\"\r\n if True:\r\n yield 1234\r\n\r\nif __name__ == '__main__':\r\n b = Bob()\r\n c = b.doit\r\n if inspect.isgeneratorfunction(c):\r\n tree = ast.parse(dedent(inspect.getsource(c)))\r\n```\r\n\r\n**Expected behavior:** [What you expect to happen]\r\n\r\nNo Error\r\n\r\n**Actual behavior:** [What actually happens]\r\n\r\n```console\r\n$ python3.7 is_generator_bug.py\r\nTraceback (most recent call last):\r\n File \"is_generator_bug.py\", line 16, in \r\n tree = ast.parse(dedent(inspect.getsource(c)))\r\n File \"/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ast.py\", line 35, in parse\r\n return compile(source, filename, mode, PyCF_ONLY_AST)\r\n File \"\", line 1\r\n def doit(self):\r\n ^\r\nIndentationError: unexpected indent\r\n```\r\n\r\n**Reproduces how often:** [What percentage of the time does it reproduce?]\r\n\r\n100%\r\n\r\n### Versions\r\n\r\n```\r\nScrapy : 2.0.1\r\nlxml : 4.5.0.0\r\nlibxml2 : 2.9.10\r\ncssselect : 1.1.0\r\nparsel : 1.5.2\r\nw3lib : 1.21.0\r\nTwisted : 20.3.0\r\nPython : 3.7.7 (default, Mar 11 2020, 23:30:22) - [Clang 10.0.0 (clang-1000.11.45.5)]\r\npyOpenSSL : 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019)\r\ncryptography : 2.8\r\nPlatform : Darwin-17.7.0-x86_64-i386-64bit\r\n```\r\n\r\n### Additional context\r\n", "pr_html_url": "https://github.com/scrapy/scrapy/pull/4935", "file_loc": "{'base_commit': '0dad0fce72266aa7b38b536f87bab26e7f233c74', 'files': [{'path': 'scrapy/utils/misc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12]}, \"(None, 'is_generator_with_return_value', 217)\": {'mod': [230]}, \"(None, 'warn_on_generator_with_return_value', 240)\": {'mod': [245, 247, 248, 249, 250, 251]}}}, {'path': 'tests/test_utils_misc/test_return_with_argument_inside_generator.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [3]}, \"('UtilsMiscPy3TestCase', None, 6)\": {'mod': [8, 9]}, \"('UtilsMiscPy3TestCase', 'test_generators_with_return_statements', 8)\": {'mod': [13, 17, 21, 25, 28, 32, 40, 41, 43, 44, 49, 50, 51, 52, 53, 54, 55, 56]}, \"('UtilsMiscPy3TestCase', 'g', 13)\": {'mod': [15]}, \"('UtilsMiscPy3TestCase', 'k', 28)\": {'mod': [30]}, \"('UtilsMiscPy3TestCase', 'n', 40)\": {'mod': [46, 47]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scrapy/utils/misc.py"], "doc": [], "test": ["tests/test_utils_misc/test_return_with_argument_inside_generator.py"], "config": [], "asset": []}} @@ -192,13 +192,13 @@ {"organization": "3b1b", "repo_name": "manim", "base_commit": "dbdd7996960ba46ed044a773290b02f17478c760", "iss_has_pr": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/1065", "iss_label": "", "title": "Example_scenes.py run problem question", "body": "I was able to get the install success thanks to help. I ran the example_scenes.py file and have the results below. 
I am now also going through https://talkingphysics.wordpress.com/2019/01/08/getting-started-animating-with-manim-and-python-3-7/ and have similar errors when running the first run python -m manim pymanim_tutorial_P37.py Shapes -pl. So I am trying to crawl before walking and would like to get through example_scenes and first tutorial .py run with success so any help is appreciated.\r\n\r\n\r\nC:\\Users\\Admin\\Desktop\\manim-master>python ./manim.py example_scenes.py SquareTo\r\nCircle -pl\r\nMedia will be written to ./media\\. You can change this behavior with the --media\r\n_dir flag.\r\n[concat @ 0000000000375a40] Impossible to open 'CC:/Users/Admin/Desktop/manim-ma\r\nster/media/videos/example_scenes/480p15/partial_movie_files/SquareToCircle/00000\r\n.mp4'\r\nC:\\Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scenes\\480p15\\partial_m\r\novie_files\\SquareToCircle\\partial_movie_file_list.txt: Protocol not found\r\nDid you mean file:C:\\Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scene\r\ns\\480p15\\partial_movie_files\\SquareToCircle\\partial_movie_file_list.txt?\r\n\r\nFile ready at C:\\Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scenes\\48\r\n0p15\\SquareToCircle.mp4\r\n\r\nPlayed 3 animations\r\n\r\n\r\n\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Admin\\Desktop\\manim-master\\manimlib\\extract_scene.py\", line 156\r\n, in main\r\n open_file_if_needed(scene.file_writer, **config)\r\n File \"C:\\Users\\Admin\\Desktop\\manim-master\\manimlib\\extract_scene.py\", line 35,\r\n in open_file_if_needed\r\n os.startfile(file_path)\r\nFileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\\\\\r\nUsers\\\\Admin\\\\Desktop\\\\manim-master\\\\media\\\\videos\\\\example_scenes\\\\480p15\\\\Squa\r\nreToCircle.mp4'\r\n", "pr_html_url": "https://github.com/3b1b/manim/pull/1057", "file_loc": "{'base_commit': 'dbdd7996960ba46ed044a773290b02f17478c760', 'files': [{'path': 'manimlib/scene/scene_file_writer.py', 'status': 'modified', 'Loc': {\"('SceneFileWriter', 'combine_movie_files', 253)\": {'mod': [289]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["manimlib/scene/scene_file_writer.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "c84aeac6b5695e7e1ac629d17fc51eb68ab91bae", "iss_has_pr": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/502", "iss_label": "external issue", "title": "[youtube] YouTube serving erroneous DASH Manifest VP9 formats", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n\r\n- [x] I'm reporting a broken site support\r\n- [x] I've verified that I'm running yt-dlp version **2021.07.07**\r\n- [x] I've checked that all provided URLs are alive and playable in a browser (but with condition, see below)\r\n- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [x] I've searched the bugtracker for similar issues including closed ones\r\n\r\n\r\n## Verbose log\r\n\r\n```\r\n$ yt-dlp V_h3Z40AAtw -F\r\n[youtube] V_h3Z40AAtw: Downloading webpage\r\n[youtube] V_h3Z40AAtw: Downloading MPD manifest\r\n[info] Available formats for V_h3Z40AAtw:\r\nID EXT RESOLUTION FPS | FILESIZE TBR PROTO | VCODEC VBR ACODEC ABR ASR NOTE\r\n--- ---- ---------- --- - --------- ----- ----- - ----------- ----- --------- ---- ------- 
---------------------------------------\r\n139 m4a audio only | 1.52MiB 50k dash | mp4a.40.5 50k 22050Hz DASH audio, m4a_dash, 22050Hz\r\n140 m4a audio only | 4.02MiB 129k https | mp4a.40.2 129k 44100Hz audio_quality_medium, m4a_dash, 44100Hz\r\n160 mp4 256x144 30 | 108k dash | avc1.4d400b 108k DASH video, mp4_dash\r\n278 webm 256x144 30 | 95k dash | vp9 95k DASH video, webm_dash\r\n133 mp4 426x240 30 | 242k dash | avc1.4d400c 242k DASH video, mp4_dash\r\n242 webm 426x240 30 | 220k dash | vp9 220k DASH video, webm_dash\r\n134 mp4 640x360 30 | 19.25MiB 620k https | avc1.4d401e 620k 360p, mp4_dash\r\n18 mp4 640x360 30 | 22.68MiB 730k https | avc1.42001E 730k mp4a.40.2 0k 44100Hz 360p, 44100Hz\r\n243 webm 640x360 30 | 405k dash | vp9 405k DASH video, webm_dash\r\n135 mp4 854x480 30 | 1155k dash | avc1.4d400c 1155k DASH video, mp4_dash\r\n244 webm 854x480 30 | 752k dash | vp9 752k DASH video, webm_dash\r\n136 mp4 1280x720 30 | 69.87MiB 2251k https | avc1.4d401f 2251k 720p, mp4_dash\r\n22 mp4 1280x720 30 | 2380k https | avc1.64001F 2380k mp4a.40.2 0k 44100Hz 720p, 44100Hz\r\n247 webm 1280x720 30 | 1505k dash | vp9 1505k DASH video, webm_dash\r\n248 webm 1920x1080 30 | 2646k dash | vp9 2646k DASH video, webm_dash\r\n\r\n$ yt-dlp -v V_h3Z40AAtw\r\n[debug] Command-line config: ['-v', 'V_h3Z40AAtw']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.07.13.1626134551 (zip)\r\n[debug] Python version 3.9.6 (CPython 64bit) - Linux-5.8.0-41-generic-x86_64-with-glibc2.32\r\n[debug] exe versions: ffmpeg 4.3.1, ffprobe 4.3.1, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n[debug] [youtube] Extracting URL: V_h3Z40AAtw\r\n[youtube] V_h3Z40AAtw: Downloading webpage\r\n[youtube] [debug] Fetching webpage from https://www.youtube.com/watch?v=V_h3Z40AAtw&bpctr=9999999999&has_verified=1\r\n[youtube] V_h3Z40AAtw: Downloading MPD manifest\r\n[youtube] [debug] Fetching webpage from https://manifest.googlevideo.com/api/manifest/dash/expire/1626251414/ei/NkzuYLz-Ap-0s8IPlf2YyAo/ip/2001%3A19f0%3A7001%3A13a1%3A5400%3A3ff%3Afe11%3A205f/id/57f877678d0002dc/source/youtube/requiressl/yes/playback_host/r2---sn-oguelne7.googlevideo.com/mh/t1/mm/31%2C29/mn/sn-oguelne7%2Csn-oguesnzz/ms/au%2Crdu/mv/m/mvi/2/pl/55/tx/24027688/txs/24027687%2C24027688%2C24027689%2C24027690/hfr/all/as/fmp4_audio_clear%2Cwebm_audio_clear%2Cwebm2_audio_clear%2Cfmp4_sd_hd_clear%2Cwebm2_sd_hd_clear/initcwndbps/1267500/vprv/1/mt/1626229496/fvip/2/keepalive/yes/fexp/24001373%2C24007246/itag/0/sparams/expire%2Cei%2Cip%2Cid%2Csource%2Crequiressl%2Ctx%2Ctxs%2Chfr%2Cas%2Cvprv%2Citag/sig/AOq0QJ8wRAIgel_8rJx7O1ChqaQTDiBI5cysHmZ_4uCmgCWN_kPxy8cCIDfgAFVDYl5WO7a1gLifSDw6vBfjELblxSgkOodOm1am/lsparams/playback_host%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps/lsig/AG3C_xAwRAIgJVqT1xhW2KIhXVIj6cJRomDQ7-UOvq8yyC_J5r7ksfMCIBG0VgIJSIlNe49rl6ty6WA_DuH2AhKJvLOpq8fUBojv\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] V_h3Z40AAtw: Downloading 1 format(s): 248+140\r\n[debug] locking youtube_V_h3Z40AAtw.lock\r\n[debug] Invoking downloader on 
\"https://manifest.googlevideo.com/api/manifest/dash/expire/1626251414/ei/NkzuYLz-Ap-0s8IPlf2YyAo/ip/2001%3A19f0%3A7001%3A13a1%3A5400%3A3ff%3Afe11%3A205f/id/57f877678d0002dc/source/youtube/requiressl/yes/playback_host/r2---sn-oguelne7.googlevideo.com/mh/t1/mm/31%2C29/mn/sn-oguelne7%2Csn-oguesnzz/ms/au%2Crdu/mv/m/mvi/2/pl/55/tx/24027688/txs/24027687%2C24027688%2C24027689%2C24027690/hfr/all/as/fmp4_audio_clear%2Cwebm_audio_clear%2Cwebm2_audio_clear%2Cfmp4_sd_hd_clear%2Cwebm2_sd_hd_clear/initcwndbps/1267500/vprv/1/mt/1626229496/fvip/2/keepalive/yes/fexp/24001373%2C24007246/itag/0/sparams/expire%2Cei%2Cip%2Cid%2Csource%2Crequiressl%2Ctx%2Ctxs%2Chfr%2Cas%2Cvprv%2Citag/sig/AOq0QJ8wRAIgel_8rJx7O1ChqaQTDiBI5cysHmZ_4uCmgCWN_kPxy8cCIDfgAFVDYl5WO7a1gLifSDw6vBfjELblxSgkOodOm1am/lsparams/playback_host%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps/lsig/AG3C_xAwRAIgJVqT1xhW2KIhXVIj6cJRomDQ7-UOvq8yyC_J5r7ksfMCIBG0VgIJSIlNe49rl6ty6WA_DuH2AhKJvLOpq8fUBojv\"\r\n[dashsegments] Total fragments: 50\r\n[download] Destination: Sweet Candy \u2461-V_h3Z40AAtw.f248.webm\r\n[download] Got server HTTP error: HTTP Error 404: Not Found. Retrying fragment 1 (attempt 1 of 10) ...\r\n^C[debug] unlocking youtube_V_h3Z40AAtw.lock\r\n\r\nERROR: Interrupted by user\r\n```\r\n\r\n\r\n\r\n## Description\r\n[link to video](https://youtu.be/V_h3Z40AAtw)\r\n\r\nThe video itself plays on browser, and doesn't have 1080p as you can see it.\r\n\r\nBut yt-dlp (and youtube-dl) reports 1080p format, which possibly doesn't exist on the server. (format `248` on the video fails to download all segments.)\r\n\r\nResolutions shown in webpage here:\r\n![image](https://user-images.githubusercontent.com/10355528/125551541-4e14ac3a-0f66-40e2-9931-0a4f25e04750.png)\r\n\r\n__Edit:__ Tested web and android clients, some locations (JP, Vultr JP, OCI US?), with cookies or not, but all of them has this \"ghosty\" format", "pr_html_url": "https://github.com/yt-dlp/yt-dlp/pull/536", "file_loc": "{'base_commit': 'c84aeac6b5695e7e1ac629d17fc51eb68ab91bae', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1340, 1341]}}}, {'path': 'yt_dlp/downloader/youtube_live_chat.py', 'status': 'modified', 'Loc': {\"('YoutubeLiveChatFD', 'download_and_parse_fragment', 111)\": {'mod': [119]}, \"('YoutubeLiveChatFD', 'real_download', 22)\": {'mod': [149, 158, 186]}}}, {'path': 'yt_dlp/extractor/youtube.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [35, 39, 42], 'mod': [31, 34, 38]}, \"('YoutubeBaseInfoExtractor', None, 68)\": {'add': [394, 404], 'mod': [423, 424, 486, 522, 530, 531, 532]}, \"('YoutubeIE', None, 756)\": {'add': [1617, 1659], 'mod': [1125, 1290, 1297, 1298, 1299, 1655, 1661, 1862, 1863, 1864, 1865, 2290, 2291, 2292, 2294, 2296, 2297, 2298, 2299, 2301, 2302, 2303, 2304, 2306, 2307, 2308, 2309, 2310, 2311, 2312, 2313, 2314, 2315]}, \"('YoutubeIE', '_extract_player_url', 1693)\": {'add': [1698], 'mod': [1695]}, \"('YoutubeIE', '_get_video_info_params', 2271)\": {'add': [2279]}, \"('YoutubeIE', '_real_extract', 2290)\": {'add': [2574, 2600, 2611, 2642, 2829], 'mod': [2317, 2318, 2320, 2321, 2322, 2323, 2324, 2325, 2326, 2327, 2329, 2330, 2331, 2332, 2333, 2334, 2335, 2336, 2337, 2339, 2340, 2341, 2342, 2343, 2345, 2346, 2347, 2348, 2349, 2350, 2352, 2353, 2354, 2355, 2356, 2357, 2358, 2359, 2360, 2361, 2362, 2363, 2364, 2365, 2367, 2368, 2369, 2370, 2371, 2372, 2373, 2374, 2376, 2377, 2378, 2379, 2380, 2381, 2382, 2383, 2384, 2385, 2386, 2387, 2388, 2389, 2390, 2391, 2392, 2393, 2394, 2395, 2396, 
2397, 2398, 2400, 2401, 2402, 2403, 2404, 2405, 2406, 2407, 2408, 2409, 2410, 2411, 2412, 2413, 2414, 2415, 2417, 2418, 2419, 2420, 2421, 2422, 2423, 2424, 2425, 2426, 2427, 2429, 2430, 2432, 2433, 2434, 2435, 2436, 2437, 2438, 2440, 2441, 2442, 2444, 2445, 2446, 2447, 2448, 2449, 2450, 2451, 2452, 2454, 2455, 2456, 2457, 2458, 2459, 2460, 2461, 2462, 2463, 2464, 2465, 2466, 2467, 2468, 2470, 2471, 2472, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481, 2482, 2483, 2484, 2485, 2486, 2487, 2488, 2489, 2490, 2491, 2492, 2493, 2494, 2496, 2498, 2507, 2508, 2509, 2510, 2511, 2557, 2588, 2591, 2594, 2603, 2619, 2622, 2625, 2626, 2627, 2628, 2629, 2630, 2632, 2634, 2639, 2645, 2663, 2664, 2665, 2666, 2667, 2668, 2669, 2670, 2671, 2672, 2673, 2676, 2677, 2678, 2679, 2680, 2681, 2682, 2683, 2684, 2685, 2686, 2687, 2688, 2689, 2690, 2691, 2692, 2728, 2730, 2734, 2737, 2738, 2740, 2742, 2749, 2750, 2753, 2754, 2755, 2832, 2946, 2947, 2979, 2980, 2981, 2989, 2990, 2993, 2994, 3007, 3009]}, \"('YoutubePlaylistIE', None, 4145)\": {'add': [4167, 4195], 'mod': [4190]}, \"('YoutubeSearchURLIE', None, 4379)\": {'add': [4387]}, \"('YoutubeBaseInfoExtractor', '_call_api', 470)\": {'mod': [476]}, \"('YoutubeBaseInfoExtractor', '_extract_identity_token', 493)\": {'mod': [494]}, \"('YoutubeBaseInfoExtractor', '_generate_api_headers', 530)\": {'mod': [535, 536, 541]}, \"('YoutubeIE', '_comment_entries', 2040)\": {'mod': [2125]}, \"('YoutubeTabIE', None, 3014)\": {'mod': [3290]}, \"('YoutubeTabIE', '_entries', 3639)\": {'mod': [3696]}, \"('YoutubeTabIE', '_extract_from_tabs', 3779)\": {'mod': [3846]}, \"('YoutubeTabIE', '_extract_mix_playlist', 3854)\": {'mod': [3856, 3857]}, \"('YoutubeTabIE', '_reload_with_unavailable_videos', 3950)\": {'mod': [3974, 3975]}, \"('YoutubeTabIE', '_extract_webpage', 3989)\": {'mod': [4002]}, \"('YoutubeSearchIE', '_get_n_results', 4367)\": {'mod': [4369]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["yt_dlp/extractor/youtube.py", "yt_dlp/downloader/youtube_live_chat.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "59ca996eb1b510cef7ae60a179c36ea7f353f71e", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/197", "iss_label": "", "title": "Face on angles >180 degrees not recognized on extraction", "body": "Hi guys thanks for the amazing work here!\r\n\r\nI have been following the landmark detection dialogue #187 and have tried both hog and cnn with both face-alignment and face_recognition. I got face-alignment with cnn working great, with pytorch on win10 now. However, I noticed that none of the above are able to reliably able to identify faces where the face is pointing downwards, for example with the forehead pointing from 6 to 9 o'clock.\r\n\r\nI think all these algorithms tend to look for eyes being above the level of the mouth.\r\n\r\nFor example [Image Removed) this image would not be detected and extracted by hog or cnn in face-alignment or face_recognition.\r\n\r\nHowever by rotating it 90 deg to the right, so that the forehead is pointing up, makes it extracted.\r\n\r\nWould it be possible to have an argument set to resend the image for alignment but rotated if it was not caught the first time? 
\r\n\r\nI am ok with python and novice with git but could maybe even give it a try if someone points me to where the frame is passed for extraction.\r\n\r\nThanks!", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/253", "file_loc": "{'base_commit': '59ca996eb1b510cef7ae60a179c36ea7f353f71e', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {\"('DirectoryProcessor', 'get_faces_alignments', 140)\": {'add': [144]}, '(None, None, None)': {'mod': [9]}, \"('DirectoryProcessor', None, 29)\": {'mod': [157]}, \"('DirectoryProcessor', 'get_faces', 157)\": {'mod': [159]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {\"('DetectedFace', '__init__', 10)\": {'add': [11]}, \"(None, 'detect_faces', 3)\": {'mod': [3, 7]}, \"('DetectedFace', None, 9)\": {'mod': [10]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 32]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}, \"('ConvertImage', 'convert', 216)\": {'mod': [229, 230]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {\"('ExtractTrainingData', 'add_optional_arguments', 22)\": {'add': [68]}, \"('ExtractTrainingData', None, 12)\": {'add': [100]}, \"('ExtractTrainingData', 'handleImage', 101)\": {'add': [104, 117], 'mod': [102, 106, 107]}, '(None, None, None)': {'mod': [7]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/utils.py", "lib/faces_detect.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/239", "iss_label": "", "title": "EOL Error when training", "body": "After pulling the latest commit today I am now getting the below error when trying to train.\r\n\r\n**Command**\r\npython faceswap.py train -A \"D:\\Fakes\\Data\\Dataset_A\\Faces\" -B \"D:\\Fakes\\Data\\Dataset_B\\Faces\" -m \"D:\\Fakes\\Model\" -p -s 100 -bs 80 -t LowMem\r\n\r\n**Error**\r\nTraceback (most recent call last):\r\n File \"faceswap.py\", line 12, in \r\n from scripts.convert import ConvertImage\r\n File \"D:\\Fakes\\faceswap\\scripts\\convert.py\", line 100\r\n help=\"Erosion kernel size. (Masked converter only). 
Positive values apply erosion which reduces the edge \\\r\n ^\r\nSyntaxError: EOL while scanning string literal\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/9438672b1cf80602fc93536670d9601d655377f5", "file_loc": "{'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/convert.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9438672b1cf80602fc93536670d9601d655377f5", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/239", "iss_label": "", "title": "EOL Error when training", "body": "After pulling the latest commit today I am now getting the below error when trying to train.\r\n\r\n**Command**\r\npython faceswap.py train -A \"D:\\Fakes\\Data\\Dataset_A\\Faces\" -B \"D:\\Fakes\\Data\\Dataset_B\\Faces\" -m \"D:\\Fakes\\Model\" -p -s 100 -bs 80 -t LowMem\r\n\r\n**Error**\r\nTraceback (most recent call last):\r\n File \"faceswap.py\", line 12, in \r\n from scripts.convert import ConvertImage\r\n File \"D:\\Fakes\\faceswap\\scripts\\convert.py\", line 100\r\n help=\"Erosion kernel size. (Masked converter only). Positive values apply erosion which reduces the edge \\\r\n ^\r\nSyntaxError: EOL while scanning string literal\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/9438672b1cf80602fc93536670d9601d655377f5", "file_loc": "{'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/convert.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "5f7a07c0c867abedbb3ebf135915eeee56add24b", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/9326", "iss_label": "", "title": "Issue with 'char_to_token()' function of DistilBertTokenizerFast ", "body": "## Environment info\r\n\r\n \r\n- `transformers` version: 4.0.1\r\n- Platform: Google Colab\r\n- Python version: 3.8\r\n- PyTorch version (GPU?):\r\n- Tensorflow version (GPU?): 2.4.0\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: NA\r\n\r\n### Who can help: **tokenizers: @mfuntowicz**\r\n\r\n## Information\r\n\r\nModel I am using DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') to tokenize Squad 2.0 train and validate dataset. 
\r\n\r\nThe problem arises when using below code snippet to add_token_positions (start and end position) as below from https://huggingface.co/transformers/custom_datasets.html:\r\n\r\n_def add_token_positions(encodings, answers):\r\n start_positions = []\r\n end_positions = []\r\n for i in range(len(answers)):\r\n start_positions.append(**encodings.char_to_token(i, answers[i]['answer_start'])**)\r\n end_positions.append(**encodings.char_to_token(i, answers[i]['answer_end'] - 1**))\r\n # if None, the answer passage has been truncated\r\n if start_positions[-1] is None:\r\n start_positions[-1] = tokenizer.model_max_length\r\n if end_positions[-1] is None:\r\n end_positions[-1] = tokenizer.model_max_length\r\n encodings.update({'start_positions': start_positions, 'end_positions': end_positions})\r\n\r\nadd_token_positions(train_encodings, train_answers)\r\nadd_token_positions(val_encodings, val_answers)_\r\n\r\n\r\n\r\n\r\nThe tasks I am working on is:\r\n*Training model on SQUaD 2.0 using code given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1. Follow the steps given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0 and then verify start and end position outcome using below code snippet in Expected behavior\r\n\r\n\r\n\r\n\r\n## Expected behavior:\r\n- Start and End position are being defined using above code snippet which will be provided as training/validation data to model but end position is not derived as correct value due to some issue with char_to_token() function which is used to find out end position.\r\n- Please find below snippet for verification that answer using start and end position after tokenization is not matching with actual answer.\r\n- So the training data which is being fed to model after tokenization is incorrect\r\n\r\nidx=8\r\nprint(f'Actual context: {train_contexts[idx]}')\r\nprint(f'Actual question: {train_questions[idx]}')\r\nprint(f\"Actual answer: {train_answers[idx]['text']}\")\r\n\r\nstart_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start'])\r\nend_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end'])\r\nprint(f\"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}\") \r\n\r\nOUTPUT:\r\n**Actual context:** Beyonc\u00e9 Giselle Knowles-Carter (/bi\u02d0\u02c8j\u0252nse\u026a/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. 
Their hiatus saw the release of Beyonc\u00e9's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles \"Crazy in Love\" and \"Baby Boy\".\r\n**Actual question:** When did Beyonc\u00e9 rise to fame?\r\n**Actual answer:** late 1990s\r\n**Answer after tokenization:** ['late', '1990s', 'as', 'lead', 'singer', 'of', 'r', '&', 'b', 'girl', '-', 'group', 'destiny', \"'\", 's', 'child', '.', 'managed', 'by', 'her', 'father', ',', 'mathew', 'knowles', ',', 'the', 'group', 'became', 'one', 'of', 'the', 'world', \"'\", 's', 'best', '-', 'selling', 'girl', 'groups', 'of', 'all', 'time', '.', 'their', 'hiatus', 'saw', 'the', 'release', 'of', 'beyonce', \"'\", 's', 'debut', 'album', ',', 'dangerously', 'in', 'love', '(', '2003', ')', ',', 'which', 'established', 'her', 'as', 'a', 'solo', 'artist', 'worldwide', ',', 'earned', 'five', 'grammy', 'awards', 'and', 'featured', 'the', 'billboard', 'hot', '100', 'number', '-', 'one', 'singles', '\"', 'crazy', 'in', 'love', '\"', 'and', '\"', 'baby', 'boy', '\"', '.', '[SEP]', 'when', 'did', 'beyonce', 'rise', 'to', 'fame', '?', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', 
'[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]']", "pr_html_url": "https://github.com/huggingface/transformers/pull/9378", "file_loc": "{'base_commit': '5f7a07c0c867abedbb3ebf135915eeee56add24b', 'files': [{'path': 'docs/source/custom_datasets.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [564], 'mod': [561, 562, 566]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": ["docs/source/custom_datasets.rst"], "test": [], "config": [], "asset": []}} {"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "2940f987c0996fe083d1777bdc117fc28c576c08", "iss_has_pr": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1007", "iss_label": "bug\nprimordial", "title": "running ingest throws attribute error module 'chromadb' has no attribute 'PersistentClient'", "body": "```\r\n(privategpt-py3.11) (base) \u279c privateGPT git:(main) \u2717 python ingest.py\r\nTraceback (most recent call last):\r\n File \"/Volumes/Projects/privateGPT/ingest.py\", line 169, in \r\n main()\r\n File \"/Volumes/Projects/privateGPT/ingest.py\", line 146, in main\r\n chroma_client = chromadb.PersistentClient(settings=CHROMA_SETTINGS , path=persist_directory)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\nAttributeError: module 'chromadb' has no attribute 'PersistentClient'\r\n\r\n```\r\n\r\n.env file:\r\n\r\n```\r\nPERSIST_DIRECTORY=db\r\nMODEL_TYPE=GPT4All\r\nMODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin\r\nEMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2\r\nMODEL_N_CTX=1000\r\nMODEL_N_BATCH=8\r\nTARGET_SOURCE_CHUNKS=4\r\n```\r\n\r\n**Environment (please complete the following information):**\r\n - OS / hardware: macOS 13.5.1\r\n - Python version 3.11.5\r\n\r\nany idea what's wrong here or how to solve it?", "pr_html_url": "https://github.com/zylon-ai/private-gpt/pull/1015", "file_loc": "{'base_commit': '2940f987c0996fe083d1777bdc117fc28c576c08', 'files': [{'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11, 12, 13, 14, 15, 16, 18, 19, 23]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": [], "config": ["pyproject.toml"], "asset": []}} {"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bbb9645f7c60c35177922d10ccc7ed4b90d261c3", "iss_has_pr": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/979", "iss_label": "", "title": "`utils.text.reduce_message_length` Not reducing text length", "body": "**Bug description**\r\nI come across this issue.\r\n```python\r\n File 
\"/Users/azure/Documents/Workspace/Datasci/lib/python3.10/site-packages/metagpt/utils/text.py\", line 31, in reduce_message_length\r\n raise RuntimeError(\"fail to reduce message length\")\r\nRuntimeError: fail to reduce message length\r\n```\r\n\r\n**Bug solved method**\r\nDigging into the code, I assume `utils.text.reduce_message_length()` only check if the token is short enough.\r\nIf it's too long, it simply raise exception, instead of shorten it.\r\nFollowing it the code in `utils.text.reduce_message_length()`\r\n```python\r\ndef reduce_message_length(\r\n msgs: Generator[str, None, None],\r\n model_name: str,\r\n system_text: str,\r\n reserved: int = 0,\r\n) -> str:\r\n max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved\r\n for msg in msgs:\r\n if count_string_tokens(msg, model_name) < max_token or model_name not in TOKEN_MAX:\r\n return msg\r\n\r\n raise RuntimeError(\"fail to reduce message length\")\r\n``` \r\n\r\n- LLM type and model name:\r\n- System version:MetaGPT 0.7.4\r\n- Python version: Python 3.10.13\r\n\r\nIs it a feature that is not implemented yet, or I can try to create a PR to fix it", "pr_html_url": "https://github.com/geekan/MetaGPT/pull/986", "file_loc": "{'base_commit': 'bbb9645f7c60c35177922d10ccc7ed4b90d261c3', 'files': [{'path': 'metagpt/actions/research.py', 'status': 'modified', 'Loc': {\"('CollectLinks', 'run', 94)\": {'mod': [137]}}}, {'path': 'metagpt/config2.py', 'status': 'modified', 'Loc': {\"('Config', 'default', 88)\": {'mod': [95]}}}, {'path': 'metagpt/utils/token_counter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [142, 158, 161], 'mod': [144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157]}, \"(None, 'count_message_tokens', 182)\": {'mod': [182, 212, 213]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["metagpt/actions/research.py", "metagpt/config2.py", "metagpt/utils/token_counter.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "c56bce482db698c7c7e7b583b8b2e08a211eb48b", "iss_has_pr": 1, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/10463", "iss_label": "API", "title": "Toward a consistent API for NearestNeighbors & co", "body": "### Estimators relying on `NearestNeighbors` (NN), and their related params:\r\n`params = (algorithm, leaf_size, metric, p, metric_params, n_jobs)`\r\n\r\n**sklearn.neighbors:**\r\n- `NearestNeighbors(n_neighbors, radius, *params)`\r\n- `KNeighborsClassifier(n_neighbors, *params)`\r\n- `KNeighborsRegressor(n_neighbors, *params)`\r\n- `RadiusNeighborsClassifier(radius, *params)`\r\n- `RadiusNeighborsRegressor(radius, *params)`\r\n- `LocalOutlierFactor(n_neighbors, *params)`\r\n- ~`KernelDensity(algorithm, metric, leaf_size, metric_params)`\r\n\r\n**sklearn.manifold:**\r\n- `TSNE(method=\"barnes_hut\", metric)`\r\n- `Isomap(n_neighbors, neighbors_algorithm, n_jobs)`\r\n- `LocallyLinearEmbedding(n_neighbors, neighbors_algorithm, n_jobs)`\r\n- `SpectralEmbedding(affinity='nearest_neighbors', n_neighbors, n_jobs)`\r\n\r\n**sklearn.cluster:**\r\n- `SpectralClustering(affinity='nearest_neighbors', n_neighbors, n_jobs)`\r\n- `DBSCAN(eps, *params)`\r\n\r\n### How do they call `NearestNeighbors` ?\r\n- Inherit from `NeighborsBase._fit`: NearestNeighbors, 
KNeighborsClassifier, KNeighborsRegressor, RadiusNeighborsClassifier, RadiusNeighborsRegressor, LocalOutlierFactor\r\n- Call `BallTree/KDTree(X)`: KernelDensity\r\n- Call `kneighbors_graph(X)`: SpectralClustering, SpectralEmbedding\r\n- Call `NearestNeighbors().fit(X)`: TSNE, DBSCAN, Isomap, kneighbors_graph\r\n\r\n### Do they handle other form of input X?\r\n- Handle precomputed distances matrix, with (metric/affinity='precomputed'): TSNE, DBSCAN, SpectralEmbedding, SpectralClustering\r\n- Handle `KNeighborsMixin` object: kneighbors_graph\r\n- Handle `NeighborsBase` object: all estimators inheriting NeighborsBase + UnsupervisedMixin\r\n- Handle `BallTree/KDTree` object: all estimators inheriting NeighborsBase + SupervisedFloatMixin/SupervisedIntegerMixin\r\n___\r\n### Issues:\r\n1. We don't have all NN parameters in all classes (e.g. `n_jobs` in TSNE).\r\n2. We can't give a custom NN estimators to these classes. (PR #3922 #8999)\r\n3. The handle of input X as a `NearestNeighbors/BallTree/KDTree` object is not consistent, and not well documented. Sometimes it is documented but does not work (e.g. Isomap), or not well documented but it does work (e.g. LocalOutlierFactor). Most classes almost handle it since `NearestNeighbors().fit(NearestNeighbors().fit(X))` works, but a call to `check_array(X)` prevents it (e.g. Isomap, DBSCAN, SpectralEmbedding, SpectralClustering, LocallyLinearEmbedding, TSNE).\r\n4. The handle of X as a precomputed distances matrix is not consistent, and sometimes does not work with sparse matrices (as given by `kneighbors_graph`) (e.g. TSNE #9691).\r\n\r\n### Proposed solutions:\r\n\r\nA. We could generalize the use of precomputed distances matrix, and use pipelines to chain `NearestNeighbors` with other estimators. Yet it might not be possible/efficient for some estimators. I this case one would have to adapt the estimators to allow for the following: `Estimator(neighbors='precomputed').fit(distance_matrix, y)`\r\n\r\nB. We could improve the checking of X to enable more widely having X as a `NearestNeighbors/BallTree/KDTree` fitted object. The changes would be probably limited, however, this solution is not pipeline-friendly.\r\n\r\nC. To be pipeline-friendly, a custom `NearestNeighbors` object could be passed in the params, unfitted. We could then put all NN-related parameters in this estimator parameter, and allow custom estimators with a clear API. 
This is essentially what is proposed in #8999.", "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/10482", "file_loc": "{'base_commit': 'c56bce482db698c7c7e7b583b8b2e08a211eb48b', 'files': [{'path': 'doc/glossary.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [699]}}}, {'path': 'doc/modules/classes.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1236, 1239]}}}, {'path': 'doc/modules/neighbors.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [511]}}}, {'path': 'doc/whats_new/v0.22.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [71, 316, 399]}}}, {'path': 'sklearn/cluster/dbscan_.py', 'status': 'modified', 'Loc': {\"(None, 'dbscan', 23)\": {'mod': [54, 55]}, \"('DBSCAN', None, 147)\": {'mod': [175, 176]}, \"('DBSCAN', 'fit', 284)\": {'mod': [322, 323, 331, 332, 333, 334, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347]}}}, {'path': 'sklearn/cluster/spectral.py', 'status': 'modified', 'Loc': {\"('SpectralClustering', 'fit', 448)\": {'add': [481], 'mod': [471]}, '(None, None, None)': {'mod': [16]}, \"('SpectralClustering', None, 275)\": {'mod': [329, 330, 331, 332]}, \"('SpectralClustering', '_pairwise', 532)\": {'mod': [533]}}}, {'path': 'sklearn/cluster/tests/test_dbscan.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [97]}}}, {'path': 'sklearn/cluster/tests/test_spectral.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19, 104]}}}, {'path': 'sklearn/manifold/_utils.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [16, 17, 23, 27, 28, 30, 31, 32, 33, 49, 64, 65, 67, 68, 88, 97]}}}, {'path': 'sklearn/manifold/isomap.py', 'status': 'modified', 'Loc': {\"('Isomap', None, 15)\": {'add': [66, 140], 'mod': [61, 76, 77]}, \"('Isomap', '__init__', 105)\": {'add': [115], 'mod': [107]}, \"('Isomap', '_fit_transform', 117)\": {'add': [120, 130], 'mod': [118, 123]}, '(None, None, None)': {'mod': [9]}, \"('Isomap', 'fit', 165)\": {'mod': [170, 172]}, \"('Isomap', 'fit_transform', 184)\": {'mod': [189]}, \"('Isomap', 'transform', 202)\": {'mod': [215, 219, 221, 224, 225, 228, 229]}}}, {'path': 'sklearn/manifold/locally_linear.py', 'status': 'modified', 'Loc': {\"(None, 'barycenter_kneighbors_graph', 67)\": {'mod': [102]}}}, {'path': 'sklearn/manifold/spectral_embedding_.py', 'status': 'modified', 'Loc': {\"('SpectralEmbedding', '_get_affinity_matrix', 458)\": {'add': [479]}, '(None, None, None)': {'mod': [22]}, \"(None, 'spectral_embedding', 135)\": {'mod': [160]}, \"('SpectralEmbedding', None, 353)\": {'mod': [372, 373, 374]}, \"('SpectralEmbedding', '_pairwise', 455)\": {'mod': [456]}, \"('SpectralEmbedding', 'fit', 505)\": {'mod': [510, 515, 525, 529, 530]}, \"('SpectralEmbedding', 'fit_transform', 545)\": {'mod': [550, 555]}}}, {'path': 'sklearn/manifold/t_sne.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21], 'mod': [14, 17]}, \"('TSNE', '_fit', 640)\": {'add': [666], 'mod': [641, 643, 644, 645, 646, 648, 649, 650, 651, 652, 653, 654, 655, 656, 658, 659, 660, 661, 662, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 733, 737, 740, 743, 753, 754, 757, 758, 769, 772, 773]}, \"(None, '_joint_probabilities', 31)\": {'mod': [56]}, \"(None, '_joint_probabilities_nn', 63)\": {'mod': [63, 73, 74, 76, 77, 93, 94, 95, 97, 102, 103]}, \"('TSNE', 'fit_transform', 864)\": {'mod': [872]}, \"('TSNE', 'fit', 885)\": {'mod': [894]}}}, {'path': 'sklearn/manifold/tests/test_isomap.py', 'status': 'modified', 'Loc': 
{'(None, None, None)': {'add': [3, 116]}}}, {'path': 'sklearn/manifold/tests/test_spectral_embedding.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, \"(None, 'test_spectral_embedding_precomputed_affinity', 128)\": {'mod': [128, 136, 137]}, \"(None, 'test_spectral_embedding_callable_affinity', 143)\": {'mod': [143, 155, 156]}}}, {'path': 'sklearn/manifold/tests/test_t_sne.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 321], 'mod': [9]}, \"(None, 'test_binary_search', 104)\": {'mod': [107, 108, 109, 110, 112, 113]}, \"(None, 'test_binary_search_neighbors', 120)\": {'mod': [127, 128, 129, 130, 131, 132, 135, 136, 137, 138, 139, 140, 141, 142, 143, 145, 146, 148, 149, 150, 151, 152, 153, 154]}, \"(None, 'test_binary_perplexity_stability', 162)\": {'mod': [166, 169, 170, 171, 172, 174, 175, 177, 178, 179]}, \"(None, 'test_fit_csr_matrix', 265)\": {'mod': [265, 272]}, \"(None, 'test_non_square_precomputed_distances', 316)\": {'mod': [316, 317, 319, 320]}, \"(None, 'test_non_positive_precomputed_distances', 323)\": {'mod': [323, 324, 325, 326, 327, 328, 329]}, \"(None, 'test_no_sparse_on_barnes_hut', 566)\": {'mod': [566, 567, 568, 569, 570, 571, 572, 573, 574]}, \"(None, 'test_barnes_hut_angle', 609)\": {'mod': [619, 620, 621, 622, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637]}}}, {'path': 'sklearn/neighbors/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 23, 27]}}}, {'path': 'sklearn/neighbors/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [105], 'mod': [29]}, \"('NeighborsBase', '_fit', 164)\": {'add': [194, 200, 206, 235, 239], 'mod': [209]}, \"('KNeighborsMixin', 'kneighbors', 339)\": {'add': [429, 483], 'mod': [345, 346, 360, 364, 409, 417, 418, 422, 424, 425, 428, 435, 436, 438, 459, 467, 468, 469, 470, 471, 474, 480, 482, 494, 497, 498, 499]}, \"('KNeighborsMixin', 'kneighbors_graph', 502)\": {'add': [564, 575], 'mod': [508, 509, 525, 550, 551, 552, 553, 554, 555, 557, 558, 559, 563, 577]}, \"('RadiusNeighborsMixin', 'radius_neighbors_graph', 787)\": {'add': [808], 'mod': [795, 811, 832, 833, 835, 846, 853, 862]}, \"(None, '_tree_query_parallel_helper', 292)\": {'mod': [292, 298]}, \"(None, '_tree_query_radius_parallel_helper', 582)\": {'mod': [582, 588]}, \"('RadiusNeighborsMixin', None, 591)\": {'mod': [628, 787]}, \"('RadiusNeighborsMixin', 'radius_neighbors', 628)\": {'mod': [650, 654, 659, 698, 706, 718, 723, 724, 727, 728, 729, 732, 734, 753, 754, 758, 759, 761, 772, 781, 784]}}}, {'path': 'sklearn/neighbors/classification.py', 'status': 'modified', 'Loc': {\"('KNeighborsClassifier', None, 26)\": {'add': [76]}, \"('RadiusNeighborsClassifier', None, 252)\": {'add': [305]}, \"('KNeighborsClassifier', 'predict', 155)\": {'mod': [160, 161, 166, 179, 182]}, \"('KNeighborsClassifier', 'predict_proba', 197)\": {'mod': [202, 203, 208, 223, 233]}, \"('RadiusNeighborsClassifier', 'predict', 446)\": {'mod': [451, 452, 457, 469, 470, 471]}, \"('RadiusNeighborsClassifier', 'predict_proba', 489)\": {'mod': [494, 495, 500, 507, 510, 538]}}}, {'path': 'sklearn/neighbors/graph.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 7, 8]}, \"(None, 'radius_neighbors_graph', 108)\": {'add': [184], 'mod': [146, 148, 149, 159, 183]}, \"(None, '_query_include_self', 24)\": {'mod': [24, 26, 27, 28, 29, 31]}, \"(None, 'kneighbors_graph', 34)\": {'mod': [68, 70, 71, 81, 104]}}}, {'path': 'sklearn/neighbors/lof.py', 'status': 'modified', 'Loc': {\"('LocalOutlierFactor', None, 19)\": {'mod': 
[63, 64, 121]}, \"('LocalOutlierFactor', 'fit', 219)\": {'mod': [242, 250, 251]}, \"('LocalOutlierFactor', '_predict', 299)\": {'mod': [323]}, \"('LocalOutlierFactor', '_local_reachability_density', 470)\": {'mod': [478, 482, 488]}}}, {'path': 'sklearn/neighbors/regression.py', 'status': 'modified', 'Loc': {\"('KNeighborsRegressor', None, 24)\": {'add': [80]}, \"('RadiusNeighborsRegressor', None, 194)\": {'add': [251]}, '(None, None, None)': {'mod': [16]}, \"('KNeighborsRegressor', 'predict', 149)\": {'mod': [154, 155, 160, 163, 164, 165, 166, 167]}, \"('RadiusNeighborsRegressor', 'predict', 313)\": {'mod': [318, 319, 324]}}}, {'path': 'sklearn/neighbors/tests/test_neighbors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 10, 11, 16, 190, 823], 'mod': [7]}, \"(None, 'test_k_and_radius_neighbors_duplicates', 1297)\": {'add': [1320]}, \"(None, 'test_radius_neighbors_predict_proba', 1485)\": {'add': [1500]}, \"(None, 'test_precomputed', 136)\": {'mod': [136, 139, 142, 143, 144, 178, 179, 180, 181, 182]}, \"(None, 'test_kneighbors_regressor_sparse', 824)\": {'mod': [849, 850, 851, 852]}}}, {'path': 'sklearn/neighbors/unsupervised.py', 'status': 'modified', 'Loc': {\"('NearestNeighbors', None, 9)\": {'mod': [43, 44, 46, 47, 48, 49, 50, 52, 54, 56, 57, 59, 60, 61, 62, 63, 65, 66]}}}, {'path': 'sklearn/utils/estimator_checks.py', 'status': 'modified', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["sklearn/neighbors/unsupervised.py", "sklearn/neighbors/regression.py", "sklearn/manifold/t_sne.py", "sklearn/neighbors/__init__.py", "sklearn/neighbors/base.py", "sklearn/manifold/locally_linear.py", "sklearn/manifold/_utils.pyx", "sklearn/cluster/dbscan_.py", "sklearn/manifold/spectral_embedding_.py", "sklearn/cluster/spectral.py", "sklearn/manifold/isomap.py", "sklearn/neighbors/lof.py", "sklearn/neighbors/classification.py", "sklearn/neighbors/graph.py", "sklearn/utils/estimator_checks.py"], "doc": ["doc/modules/neighbors.rst", "doc/glossary.rst", "doc/modules/classes.rst", "doc/whats_new/v0.22.rst"], "test": ["sklearn/manifold/tests/test_spectral_embedding.py", "sklearn/neighbors/tests/test_neighbors.py", "sklearn/cluster/tests/test_spectral.py", "sklearn/cluster/tests/test_dbscan.py", "sklearn/manifold/tests/test_isomap.py", "sklearn/manifold/tests/test_t_sne.py"], "config": [], "asset": []}} {"organization": "python", "repo_name": "cpython", "base_commit": "55d50d147c953fab37b273bca9ab010f40e067d3", "iss_has_pr": 1, "iss_html_url": "https://github.com/python/cpython/issues/102500", "iss_label": "type-feature\ntopic-typing\n3.12", "title": "Implement PEP 688: Making the buffer protocol accessible in Python", "body": "PEP-688 has just been accepted. 
I will use this issue to track its implementation in CPython.\r\n\r\n\r\n\r\n\r\n### Linked PRs\r\n* gh-102521\r\n* gh-102571\r\n* gh-104174\r\n* gh-104281\r\n* gh-104288\r\n* gh-104317\r\n\r\n", "pr_html_url": "https://github.com/python/cpython/pull/102521", "file_loc": "{'base_commit': '55d50d147c953fab37b273bca9ab010f40e067d3', 'files': [{'path': 'Include/internal/pycore_global_objects_fini_generated.h', 'status': 'modified', 'Loc': {\"(None, '_PyStaticObjects_CheckRefcnt', 24)\": {'add': [595, 694, 1124]}}}, {'path': 'Include/internal/pycore_global_strings.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [83, 182, 612]}}}, {'path': 'Include/internal/pycore_runtime_init_generated.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [589, 688, 1118]}}}, {'path': 'Include/internal/pycore_typeobject.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [140]}}}, {'path': 'Include/internal/pycore_unicodeobject_generated.h', 'status': 'modified', 'Loc': {\"(None, '_PyUnicode_InitStaticStrings', 12)\": {'add': [98, 395, 1685]}}}, {'path': 'Include/pybuffer.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [107]}}}, {'path': 'Lib/_collections_abc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [441], 'mod': [52]}}}, {'path': 'Lib/inspect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45, 3314]}}}, {'path': 'Lib/test/test_buffer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19, 4440]}}}, {'path': 'Lib/test/test_collections.py', 'status': 'modified', 'Loc': {\"('TestCollectionABCs', None, 1416)\": {'add': [1951]}, '(None, None, None)': {'mod': [28]}}}, {'path': 'Lib/test/test_doctest.py', 'status': 'modified', 'Loc': {\"('test_DocTestFinder', 'non_Python_modules', 700)\": {'mod': [710]}}}, {'path': 'Modules/Setup.stdlib.in', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [172]}}}, {'path': 'Modules/_testcapi/parts.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [40]}}}, {'path': 'Modules/_testcapimodule.c', 'status': 'modified', 'Loc': {\"(None, 'PyInit__testcapi', 4162)\": {'add': [4312]}}}, {'path': 'Objects/clinic/memoryobject.c.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [64], 'mod': [359]}}}, {'path': 'Objects/memoryobject.c', 'status': 'modified', 'Loc': {'(None, None, 783)': {'add': [807], 'mod': [795]}, '(None, None, None)': {'add': [970, 3186], 'mod': [780]}, \"(None, '_PyManagedBuffer_FromObject', 88)\": {'mod': [88]}, '(None, None, 87)': {'mod': [96]}, \"(None, 'PyMemoryView_FromObject', 784)\": {'mod': [784]}, '(None, None, 838)': {'mod': [854]}}}, {'path': 'Objects/object.c', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16, 2075]}}}, {'path': 'Objects/typeobject.c', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 8061, 8897, 8964, 8983, 9064]}, '(None, None, 9203)': {'mod': [9211, 9212]}}}, {'path': 'PCbuild/_testcapi.vcxproj', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [112]}}}, {'path': 'PCbuild/_testcapi.vcxproj.filters', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}, {'path': 'Tools/build/generate_global_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [123]}}}, {'path': 'Tools/c-analyzer/cpython/globals-to-fix.tsv', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [88]}}}, {'path': 'Tools/c-analyzer/cpython/ignored.tsv', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [406]}}}]}", "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["Objects/typeobject.c", "Lib/inspect.py", "Tools/build/generate_global_objects.py", "Include/internal/pycore_global_objects_fini_generated.h", "Tools/c-analyzer/cpython/ignored.tsv", "Objects/clinic/memoryobject.c.h", "Lib/_collections_abc.py", "Tools/c-analyzer/cpython/globals-to-fix.tsv", "Include/internal/pycore_typeobject.h", "Include/internal/pycore_runtime_init_generated.h", "Modules/_testcapimodule.c", "Objects/memoryobject.c", "Modules/_testcapi/parts.h", "Include/pybuffer.h", "Include/internal/pycore_global_strings.h", "Include/internal/pycore_unicodeobject_generated.h", "Objects/object.c"], "doc": [], "test": ["Lib/test/test_doctest.py", "Lib/test/test_buffer.py", "Lib/test/test_collections.py"], "config": [], "asset": ["PCbuild/_testcapi.vcxproj.filters", "PCbuild/_testcapi.vcxproj", "Modules/Setup.stdlib.in"]}} -{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5", "is_iss": 0, "iss_html_url": "https://github.com/scrapy/scrapy/issues/5400", "iss_label": "bug\nCI", "title": "Tests broken with Twisted 22.1.0", "body": "`ImportError: cannot import name 'PayloadResource' from 'twisted.web.test.test_webclient'`\r\n\r\n`ImportError: cannot import name 'ForeverTakingResource' from 'twisted.web.test.test_webclient'`", "code": null, "pr_html_url": "https://github.com/scrapy/scrapy/pull/5405", "commit_html_url": null, "file_loc": "{'base_commit': 'fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5', 'files': [{'path': 'pytest.ini', 'status': 'modified', 'Loc': {'(None, None, 24)': {'mod': [24, 25]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [17, 20]}, \"('LeafResource', None, 38)\": {'mod': [38]}, \"('Root', None, 178)\": {'mod': [178]}, \"('Root', '__init__', 180)\": {'mod': [181, 190]}}}, {'path': 'tests/test_downloader_handlers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [18, 19, 37]}}}, {'path': 'tests/test_webclient.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24, 25, 26, 27, 28, 29, 30, 31, 39]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, 22)': {'mod': [22, 23]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/mockserver.py"], "doc": [], "test": ["tests/test_webclient.py", "tests/test_downloader_handlers.py"], "config": ["pytest.ini", "tox.ini"], "asset": []}} +{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5", "iss_html_url": "https://github.com/scrapy/scrapy/issues/5400", "iss_label": "bug\nCI", "title": "Tests broken with Twisted 22.1.0", "body": "`ImportError: cannot import name 'PayloadResource' from 'twisted.web.test.test_webclient'`\r\n\r\n`ImportError: cannot import name 'ForeverTakingResource' from 'twisted.web.test.test_webclient'`", "code": null, "pr_html_url": "https://github.com/scrapy/scrapy/pull/5405", "commit_html_url": null, "file_loc": "{'base_commit': 'fd55f62207bbbb18d7758c8e2ef46fe9115eb2c5', 'files': [{'path': 'pytest.ini', 'status': 'modified', 'Loc': {'(None, None, 24)': {'mod': [24, 25]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [17, 20]}, \"('LeafResource', 
None, 38)\": {'mod': [38]}, \"('Root', None, 178)\": {'mod': [178]}, \"('Root', '__init__', 180)\": {'mod': [181, 190]}}}, {'path': 'tests/test_downloader_handlers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [18, 19, 37]}}}, {'path': 'tests/test_webclient.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24, 25, 26, 27, 28, 29, 30, 31, 39]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, 22)': {'mod': [22, 23]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["tests/mockserver.py"], "doc": [], "test": ["tests/test_webclient.py", "tests/test_downloader_handlers.py"], "config": ["pytest.ini", "tox.ini"], "asset": []}} {"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "04081f810270712ba3a69577c47e5dcfa850fa90", "iss_has_pr": 1, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1355", "iss_label": "bug", "title": "The exported label txt seems have problem", "body": "Hi, @glenn-jocher i manage to use `python detect.py --save-txt` to semi-auto label images, but when i set `Open Dir` and `Change Save Dir` in [labelImg](https://github.com/tzutalin/labelImg/releases/tag/v1.8.1)\uff0cthe labelImg can not display the exported bbox, and its command line window appears error:\r\n```\r\nTraceback (most recent call last):\r\n File \"\", line 1268, in openNextImg\r\n File \"\", line 1035, in loadFile\r\n File \"\", line 1427, in loadYOLOTXTByFilename\r\n File \"Z:\\home\\darrenl\\tmp\\labelImg\\build-tools\\build\\labelImg\\out00-PYZ.pyz\\libs.yolo_io\", line 112, in __init__\r\n File \"Z:\\home\\darrenl\\tmp\\labelImg\\build-tools\\build\\labelImg\\out00-PYZ.pyz\\libs.yolo_io\", line 142, in parseYoloFormat\r\nValueError: too many values to unpack\r\n```\r\nIf i set `Change Save Dir` to another empty folder, it will not occur error, so i doubt it is the problem of exported label txt, could you have a try ?", "pr_html_url": "https://github.com/ultralytics/yolov5/pull/1377", "file_loc": "{'base_commit': '04081f810270712ba3a69577c47e5dcfa850fa90', 'files': [{'path': '.github/workflows/ci-testing.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [69, 72]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [99]}}}, {'path': 'detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [164], 'mod': [13, 159, 160]}, \"(None, 'detect', 17)\": {'mod': [18, 19, 24, 25, 26, 27]}}}, {'path': 'test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [16, 282, 290, 291, 308]}, \"(None, 'test', 20)\": {'mod': [49, 50, 51, 52]}}}, {'path': 'train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [417], 'mod': [30, 413, 414, 431, 433, 443, 469, 470, 517]}, \"(None, 'train', 37)\": {'mod': [39, 40, 41, 44, 45, 46, 49, 51, 123, 124, 191, 218, 299, 324, 372, 381]}}}, {'path': 'tutorial.ipynb', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [600, 614, 890, 972, 989, 990, 1033, 1042, 1043, 1044, 1081, 1173, 1175]}}}, {'path': 'utils/general.py', 'status': 'modified', 'Loc': {\"(None, 'get_latest_run', 63)\": {'mod': [63]}, \"(None, 'increment_dir', 954)\": {'mod': [954, 955, 956, 957, 958, 959, 960, 962, 964, 965, 966, 967, 968, 969, 970]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", 
"loc_scope": "", "info_type": ""}, "loctype": {"code": ["tutorial.ipynb", "utils/general.py", "detect.py", "train.py"], "doc": ["README.md"], "test": ["test.py"], "config": [".github/workflows/ci-testing.yml"], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "be62645dd56580dd7576032b348cf79d880851d8", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1088", "iss_label": "Feature Request", "title": "Session pickling support is broken and tests for it are removed", "body": "The commit 42b029552190f6639642d0f62d27abcd1ceed51e removes the `__attrs__` attribute of the `Session` class, which is used in the pickle protocol's `__getstate__` method.\n\nThe tests that are testing this functionality (functions `test_session_pickling` and `test_unpickled_session_requests` in the once present `tests/test_requests.py`) are also removed.\n\nThe commit messages don't seem to indicate any reason for this, and I can't find anything searching in the issues.\n\nIf it is intended that pickling of Session objects not be supported, could you give the reason? And may be the `__getstate__` and `__setstate__` methods should be removed too, as they might send a wrong message.\n\nIf this is unintended (which is what I think is the case), I can work on a pull request to fix this. Please confirm.\n\nThank you.\n", "pr_html_url": "https://github.com/psf/requests/pull/1223", "file_loc": "{'base_commit': 'be62645dd56580dd7576032b348cf79d880851d8', 'files': [{'path': 'requests/sessions.py', 'status': 'modified', 'Loc': {\"('Session', None, 166)\": {'add': [178]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/sessions.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "34261a15835390c5c464cef88c4a42b52a88b739", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/987", "iss_label": "", "title": "Massage about Pinecone initializing", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Summary \ud83d\udca1\n\nAdd a message like: \"Connecting Pinecone. This may take some time...\"\n\n### Examples \ud83c\udf08\n\n_No response_\n\n### Motivation \ud83d\udd26\n\nAt this point, if the Pinecone index setup takes a noticeable amount of time, the console just stops. 
It is necessary to notify the user that the index is being configured now and this may take some time.", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/1194", "file_loc": "{'base_commit': '34261a15835390c5c464cef88c4a42b52a88b739', 'files': [{'path': 'autogpt/memory/pinecone.py', 'status': 'modified', 'Loc': {\"('PineconeMemory', '__init__', 10)\": {'add': [40]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["autogpt/memory/pinecone.py"], "doc": [], "test": [], "config": [], "asset": []}} @@ -219,12 +219,12 @@ {"organization": "keras-team", "repo_name": "keras", "base_commit": "8f5592bcb61ff48c96560c8923e482db1076b54a", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/20324", "iss_label": "type:support\nkeras-team-review-pending", "title": "Reason for the recently added shape restriction in MultiHeadAttention", "body": "Hello,\r\n\r\nWondering why is there a restriction on the input shape of `query` and `value` to have a matching final dimension?\r\n\r\nThis blocks having cross-attention to a source that has a different shape than query, unless adding an extra projection layer. Given that all input tensors (`query`, `key`, `value`) are immediately projected by dense layers inside `MultiHeadAttention`, I don't think any restriction on final dims is necessary.\r\n\r\nFor reference, the [pytorch doc](https://keras.io/api/layers/attention_layers/multi_head_attention/) for `MultiHeadAttention` explicitly uses 3 distinct variables to describe expected dimensions for the three tensors. The tensorflow implementation does not enforce such restriction as well.\r\n\r\nThe restriction is enforced here: https://github.com/keras-team/keras/blob/5aa5f88dc200bbf2cd765d5a213c23c58da48e80/keras/src/layers/attention/multi_head_attention.py#L214-L219\r\n\r\nAnd was added as part of the PR #19973 (in response to the issue #19769)\r\n\r\nThanks", "pr_html_url": "https://github.com/keras-team/keras/pull/20340", "file_loc": "{'base_commit': '8f5592bcb61ff48c96560c8923e482db1076b54a', 'files': [{'path': 'keras/src/layers/attention/multi_head_attention.py', 'status': 'modified', 'Loc': {\"('MultiHeadAttention', 'build', 199)\": {'mod': [214, 215, 216, 217, 218, 219]}, \"('MultiHeadAttention', 'compute_output_shape', 598)\": {'mod': [607, 608, 609, 610, 611, 612]}}}, {'path': 'keras/src/layers/attention/multi_head_attention_test.py', 'status': 'modified', 'Loc': {\"('MultiHeadAttentionTest', None, 17)\": {'add': [106], 'mod': [133]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/src/layers/attention/multi_head_attention_test.py", "keras/src/layers/attention/multi_head_attention.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "ansible", "repo_name": "ansible", "base_commit": "2897cf43cea3d61b9673ce14ba796a663d99f19d", "iss_has_pr": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/56571", "iss_label": "python3\nsupport:community\nbug\nhas_pr\naffects_2.7\ncollection\ncollection:community.general\nneeds_collection_redirect\nbot_closed", "title": "\"machinectl: invalid option -- 'c'\" when using become_method: machinectl", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\n\r\n`become_method: machinectl` 
fails with the error \"machinectl: invalid option -- 'c'\".\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\n`lib/ansible/plugins/become/machinectl.py`\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.7.10\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/thomas/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.7/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.7.3 (default, Mar 26 2019, 21:43:19) [GCC 8.2.1 20181127]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nHost: Arch Linux x64\r\nTarget: Ubuntu 16.04.6 Desktop 64-bits\r\n\r\nTarget machinectl version:\r\n```\r\n$ machinectl --version\r\nsystemd 229\r\n+PAM +AUDIT +SELINUX +IMA +APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ -LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD -IDN\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\n\r\n\r\n```yaml\r\n$ ansible -m ping --user thomas --become --become-user somebody --become-method machinectl target-machine\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nNo error, `machinectl` should just work. (I need to use `machinectl` because I want to create/start a systemd user service running as `somebody`, while `somebody` may not be logged in.)\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\nWith `-vvvv` (lots of spammy ssh output): [gist](https://gist.github.com/ttencate/36b75976a3564b8cd59ce1562c906c89)\r\n\r\nFrom the output we can see that Ansible is running this command on the target:\r\n\r\n```\r\nmachinectl shell -q somebody@ /bin/sh -c '\"'\"'\"'\"'\"'\"'\"'\"'echo BECOME-SUCCESS-kppmktulvryalmvucprbybyfjgvsiseh; /usr/bin/python /var/tmp/ansible-tmp-1558089659.8620167-87402387160364/AnsiballZ_ping.py'\"'\"'\"'\"'\"'\"'\"'\"' && sleep 0\r\n```\r\n\r\nOn the Ubuntu target (systemd 229), that fails:\r\n\r\n```\r\n$ machinectl shell -q somebody@ /bin/sh -c 'echo foo'\r\nmachinectl: invalid option -- 'c'\r\n```\r\n\r\nOn the Arch Linux host (systemd 242), it succeeds:\r\n\r\n```\r\n$ machinectl shell -q thomas@ /bin/sh -c 'echo foo'\r\n[...]\r\nfoo\r\n```\r\n\r\nThe cause seems to be [systemd issue #2420](https://github.com/systemd/systemd/issues/2420), which presumably was fixed just too late to make it into the Ubuntu 16.04 release. 
A simple workaround is to add `--` before the actual command, which terminates the option list and works on both old and new [edit 2019-07-05: no it doesn't, see below!]:\r\n\r\n```\r\n$ machinectl shell -q somebody@ -- /bin/sh -c 'echo foo'\r\n[...]\r\nfoo\r\n```", "pr_html_url": "https://github.com/ansible/ansible/pull/56572", "file_loc": "{'base_commit': '2897cf43cea3d61b9673ce14ba796a663d99f19d', 'files': [{'path': 'lib/ansible/plugins/become/machinectl.py', 'status': 'modified', 'Loc': {\"('BecomeModule', 'build_become_command', 78)\": {'mod': [87]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/ansible/plugins/become/machinectl.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "b5a5268dabb2a4dea1c3c543a1ddff501b87a447", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16870", "iss_label": "Docs\nGroupby\ngood first issue", "title": "(DOC) A `string` passed to `groupby` is hard to understand based on current doc", "body": "#### Code Sample, a copy-pastable example if possible\r\nFrom [Here](pandas/doc/source/groupby.rst)\r\n```rst\r\nFor DataFrame objects, a string indicating a column to be used to group. Of course \r\ndf.groupby('A') is just syntactic sugar for df.groupby(df['A']), but \r\nit makes life simpler\r\nFor DataFrame objects, a string indicating an index level to be used to group.\r\n\r\n```\r\n#### Problem description\r\n\r\nThese two sentences are in a kind of conflict with each other, until one read until she read the note below.\r\n#### Expected Output\r\nReword to make it clear that a string may indicate column or index level\r\n\r\n#### Output of ``pd.show_versions()``\r\n\r\n
\r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\npython: 3.5.3.final.0\r\npython-bits: 64\r\nOS: Darwin\r\nOS-release: 16.6.0\r\nmachine: x86_64\r\nprocessor: i386\r\nbyteorder: little\r\nLC_ALL: None\r\nLANG: en_US.UTF-8\r\nLOCALE: en_US.UTF-8\r\n\r\npandas: 0.21.0.dev+193.gb2b5dc32e\r\npytest: 3.1.2\r\npip: 9.0.1\r\nsetuptools: 36.0.1\r\nCython: 0.25.2\r\nnumpy: 1.13.0\r\nscipy: 0.19.0\r\nxarray: None\r\nIPython: 6.0.0\r\nsphinx: 1.6.2\r\npatsy: 0.4.1\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: 1.2.1\r\ntables: None\r\nnumexpr: 2.6.2\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: 0.9999999\r\nsqlalchemy: None\r\npymysql: None\r\npsycopg2: None\r\njinja2: 2.9.6\r\ns3fs: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n
\r\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/36238", "file_loc": "{'base_commit': 'b5a5268dabb2a4dea1c3c543a1ddff501b87a447', 'files': [{'path': 'doc/source/user_guide/groupby.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [90, 91, 92, 93, 94]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": ["doc/source/user_guide/groupby.rst"], "test": [], "config": [], "asset": []}} -{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "48d0460ab9acbee223bae1be699344f8fd232224", "is_iss": 0, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/12401", "iss_label": "Indexing\nAPI Design\nDeprecate\nNeeds Discussion", "title": "DEPR: filter & select", "body": "do we need label selectors? we should for sure just have a single method for this. maybe call it `query_labels`? to be consistent with `.query` as the workhorse for data selection.\r\n\r\n- [x] ``.select`` (#17633)\r\n- [ ] ``.filter``\r\n\r\nxref #6599 \r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pandas-dev/pandas/commit/48d0460ab9acbee223bae1be699344f8fd232224", "file_loc": "{'base_commit': '48d0460ab9acbee223bae1be699344f8fd232224', 'files': [{'path': 'doc/source/whatsnew/v0.21.0.txt', 'status': 'modified', 'Loc': {'(None, None, 669)': {'add': [669]}}}, {'path': 'pandas/core/common.py', 'status': 'modified', 'Loc': {\"(None, '_apply_if_callable', 444)\": {'add': [447]}}}, {'path': 'pandas/core/generic.py', 'status': 'modified', 'Loc': {\"('NDFrame', 'select', 2338)\": {'add': [2341, 2351]}, \"('NDFrame', 'filter', 3061)\": {'mod': [3104, 3123, 3124, 3127, 3128, 3130, 3131, 3132, 3135, 3136]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {\"('_NDFrameIndexer', '__call__', 98)\": {'add': [101]}, \"('_NDFrameIndexer', '__getitem__', 110)\": {'add': [119], 'mod': [121, 123]}, \"('_NDFrameIndexer', None, 88)\": {'add': [198], 'mod': [110, 195]}, \"('_NDFrameIndexer', '_convert_tuple', 228)\": {'add': [235]}, \"('_NDFrameIndexer', '_getitem_iterable', 1110)\": {'add': [1155], 'mod': [1141]}, \"('_NDFrameIndexer', '_convert_to_indexer', 1167)\": {'add': [1260], 'mod': [1258]}, \"('_LocationIndexer', None, 1355)\": {'add': [1358], 'mod': [1357]}, \"('_iLocIndexer', '_getitem_tuple', 1735)\": {'add': [1744], 'mod': [1751]}, \"('_iLocIndexer', '_get_list_axis', 1778)\": {'add': [1785], 'mod': [1784]}, \"('_NDFrameIndexer', '_get_label', 129)\": {'mod': [138, 141]}, \"('_NDFrameIndexer', '_get_setitem_indexer', 157)\": {'mod': [176]}, \"('_NDFrameIndexer', '_multi_take_opportunity', 882)\": {'mod': [898]}, \"('_NDFrameIndexer', '_convert_for_reindex', 916)\": {'mod': [928]}, \"('_NDFrameIndexer', '_getitem_lowerdim', 963)\": {'mod': [1018]}, \"('_NDFrameIndexer', '_getitem_nested_tuple', 1024)\": {'mod': [1052]}, \"('_NDFrameIndexer', '_getitem_axis', 1072)\": {'mod': [1087]}, \"('_IXIndexer', '__init__', 1324)\": {'mod': [1328, 1336, 1337]}, \"('_IXIndexer', '_has_valid_type', 1338)\": {'mod': [1345, 1348]}, \"('_LocIndexer', '_is_scalar_access', 1518)\": {'mod': [1531]}, \"('_iLocIndexer', '_is_valid_list_like', 1716)\": {'mod': [1720, 1732]}, \"('_iLocIndexer', '_getitem_axis', 1799)\": {'mod': [1821]}}}, {'path': 'pandas/tests/frame/test_alter_axes.py', 'status': 'modified', 'Loc': {\"('TestDataFrameAlterAxes', 'test_set_index_bug', 143)\": {'add': [149], 'mod': 
[146, 147]}}}, {'path': 'pandas/tests/frame/test_axis_select_reindex.py', 'status': 'modified', 'Loc': {\"('TestDataFrameSelectReindex', None, 25)\": {'add': [798]}, \"('TestDataFrameSelectReindex', 'test_select', 798)\": {'add': [806], 'mod': [800, 801, 802, 803, 805, 808]}}}, {'path': 'pandas/tests/frame/test_mutate_columns.py', 'status': 'modified', 'Loc': {}}, {'path': 'pandas/tests/groupby/test_groupby.py', 'status': 'modified', 'Loc': {\"('TestGroupBy', '_func', 3105)\": {'mod': [3106]}}}, {'path': 'pandas/tests/series/test_indexing.py', 'status': 'modified', 'Loc': {\"('TestSeriesIndexing', 'test_select', 2227)\": {'mod': [2228, 2229, 2230, 2231, 2233, 2234, 2235]}}}, {'path': 'pandas/tests/test_multilevel.py', 'status': 'modified', 'Loc': {\"('TestMultiLevel', 'test_groupby_level_no_obs', 1236)\": {'mod': [1242]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["pandas/core/common.py", "pandas/core/generic.py", "pandas/core/indexing.py"], "doc": ["doc/source/whatsnew/v0.21.0.txt"], "test": ["pandas/tests/series/test_indexing.py", "pandas/tests/test_multilevel.py", "pandas/tests/frame/test_mutate_columns.py", "pandas/tests/groupby/test_groupby.py", "pandas/tests/frame/test_alter_axes.py", "pandas/tests/frame/test_axis_select_reindex.py"], "config": [], "asset": []}} +{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "48d0460ab9acbee223bae1be699344f8fd232224", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/12401", "iss_label": "Indexing\nAPI Design\nDeprecate\nNeeds Discussion", "title": "DEPR: filter & select", "body": "do we need label selectors? we should for sure just have a single method for this. maybe call it `query_labels`? 
to be consistent with `.query` as the workhorse for data selection.\r\n\r\n- [x] ``.select`` (#17633)\r\n- [ ] ``.filter``\r\n\r\nxref #6599 \r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pandas-dev/pandas/commit/48d0460ab9acbee223bae1be699344f8fd232224", "file_loc": "{'base_commit': '48d0460ab9acbee223bae1be699344f8fd232224', 'files': [{'path': 'doc/source/whatsnew/v0.21.0.txt', 'status': 'modified', 'Loc': {'(None, None, 669)': {'add': [669]}}}, {'path': 'pandas/core/common.py', 'status': 'modified', 'Loc': {\"(None, '_apply_if_callable', 444)\": {'add': [447]}}}, {'path': 'pandas/core/generic.py', 'status': 'modified', 'Loc': {\"('NDFrame', 'select', 2338)\": {'add': [2341, 2351]}, \"('NDFrame', 'filter', 3061)\": {'mod': [3104, 3123, 3124, 3127, 3128, 3130, 3131, 3132, 3135, 3136]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {\"('_NDFrameIndexer', '__call__', 98)\": {'add': [101]}, \"('_NDFrameIndexer', '__getitem__', 110)\": {'add': [119], 'mod': [121, 123]}, \"('_NDFrameIndexer', None, 88)\": {'add': [198], 'mod': [110, 195]}, \"('_NDFrameIndexer', '_convert_tuple', 228)\": {'add': [235]}, \"('_NDFrameIndexer', '_getitem_iterable', 1110)\": {'add': [1155], 'mod': [1141]}, \"('_NDFrameIndexer', '_convert_to_indexer', 1167)\": {'add': [1260], 'mod': [1258]}, \"('_LocationIndexer', None, 1355)\": {'add': [1358], 'mod': [1357]}, \"('_iLocIndexer', '_getitem_tuple', 1735)\": {'add': [1744], 'mod': [1751]}, \"('_iLocIndexer', '_get_list_axis', 1778)\": {'add': [1785], 'mod': [1784]}, \"('_NDFrameIndexer', '_get_label', 129)\": {'mod': [138, 141]}, \"('_NDFrameIndexer', '_get_setitem_indexer', 157)\": {'mod': [176]}, \"('_NDFrameIndexer', '_multi_take_opportunity', 882)\": {'mod': [898]}, \"('_NDFrameIndexer', '_convert_for_reindex', 916)\": {'mod': [928]}, \"('_NDFrameIndexer', '_getitem_lowerdim', 963)\": {'mod': [1018]}, \"('_NDFrameIndexer', '_getitem_nested_tuple', 1024)\": {'mod': [1052]}, \"('_NDFrameIndexer', '_getitem_axis', 1072)\": {'mod': [1087]}, \"('_IXIndexer', '__init__', 1324)\": {'mod': [1328, 1336, 1337]}, \"('_IXIndexer', '_has_valid_type', 1338)\": {'mod': [1345, 1348]}, \"('_LocIndexer', '_is_scalar_access', 1518)\": {'mod': [1531]}, \"('_iLocIndexer', '_is_valid_list_like', 1716)\": {'mod': [1720, 1732]}, \"('_iLocIndexer', '_getitem_axis', 1799)\": {'mod': [1821]}}}, {'path': 'pandas/tests/frame/test_alter_axes.py', 'status': 'modified', 'Loc': {\"('TestDataFrameAlterAxes', 'test_set_index_bug', 143)\": {'add': [149], 'mod': [146, 147]}}}, {'path': 'pandas/tests/frame/test_axis_select_reindex.py', 'status': 'modified', 'Loc': {\"('TestDataFrameSelectReindex', None, 25)\": {'add': [798]}, \"('TestDataFrameSelectReindex', 'test_select', 798)\": {'add': [806], 'mod': [800, 801, 802, 803, 805, 808]}}}, {'path': 'pandas/tests/frame/test_mutate_columns.py', 'status': 'modified', 'Loc': {}}, {'path': 'pandas/tests/groupby/test_groupby.py', 'status': 'modified', 'Loc': {\"('TestGroupBy', '_func', 3105)\": {'mod': [3106]}}}, {'path': 'pandas/tests/series/test_indexing.py', 'status': 'modified', 'Loc': {\"('TestSeriesIndexing', 'test_select', 2227)\": {'mod': [2228, 2229, 2230, 2231, 2233, 2234, 2235]}}}, {'path': 'pandas/tests/test_multilevel.py', 'status': 'modified', 'Loc': {\"('TestMultiLevel', 'test_groupby_level_no_obs', 1236)\": {'mod': [1242]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": ""}, 
"loctype": {"code": ["pandas/core/common.py", "pandas/core/generic.py", "pandas/core/indexing.py"], "doc": ["doc/source/whatsnew/v0.21.0.txt"], "test": ["pandas/tests/series/test_indexing.py", "pandas/tests/test_multilevel.py", "pandas/tests/frame/test_mutate_columns.py", "pandas/tests/groupby/test_groupby.py", "pandas/tests/frame/test_alter_axes.py", "pandas/tests/frame/test_axis_select_reindex.py"], "config": [], "asset": []}} {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9dc151e5b58abb5f8862d2aa84124ed86156e0b8", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/355", "iss_label": "", "title": "when using GUI recent version, A converting error has occurred.", "body": "I am testing the gui version downloaded today. But when converting, the following error has occurred.\r\nCan anyone tell me what I am doing wrong or how to solve it?\r\n\r\n(1) error message \r\n\"Failed to convert image: ...\\faceA_source_gui\\out1.png. Reason: argument of type 'NoneType' is not iterable\"\r\n\r\n(1) train image : \r\nhttps://imgur.com/tLB15CB\r\n\r\n(2) convert error image : \r\nhttps://imgur.com/OAzWKdR\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/352", "file_loc": "{'base_commit': '9dc151e5b58abb5f8862d2aa84124ed86156e0b8', 'files': [{'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [30]}}}, {'path': 'requirements-gpu-python35-cuda8.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}}}, {'path': 'requirements-gpu-python36-cuda9.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [10]}}}, {'path': 'requirements-python35.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}}}, {'path': 'requirements-python36.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [10]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {\"('ConvertImage', 'get_optional_arguments', 26)\": {'mod': [119]}}}, {'path': 'scripts/gui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 5, 377], 'mod': [1, 4, 10, 11, 12, 13, 14, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45, 46, 47]}, \"('TKGui', None, 423)\": {'add': [435, 470], 'mod': [424, 425, 426]}, \"('TKGui', 'extract_options', 436)\": {'add': [441], 'mod': [437, 438, 439, 443]}, \"('TKGui', 'process', 480)\": {'add': [482], 'mod': [481]}, \"('FaceswapGui', None, 49)\": {'mod': [49, 50, 51, 52, 70, 71, 72, 73, 74, 75, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 159, 160, 161, 162, 163, 165, 166, 167]}, \"('FaceswapGui', '__init__', 51)\": {'mod': [54, 56, 57, 58, 59, 60, 61, 62, 64, 65, 67, 68]}, \"('FaceswapGui', 'build_gui', 70)\": {'mod': [77, 78, 79, 80]}, \"('FaceswapGui', 'load_config', 97)\": {'mod': [98, 107, 108]}, \"('FaceswapGui', 'set_command_args', 110)\": {'mod': [111]}, \"('FaceswapGui', 'save_config', 118)\": {'mod': [119, 120, 121, 122, 132]}, \"('FaceswapGui', 'reset_config', 134)\": {'mod': [135]}, \"('FaceswapGui', 'clear_config', 145)\": {'mod': [146]}, \"('ActionFrame', None, 169)\": {'mod': [169, 170, 171, 172, 173, 174, 175, 177, 178, 179, 180, 217, 218, 219, 220]}, \"('ActionFrame', 'build_frame', 177)\": {'mod': [182, 183, 184, 185, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 204, 205, 206, 207, 209, 210, 211, 212, 213, 214, 215]}, \"('ActionFrame', 'add_util_buttons', 217)\": {'mod': [222, 223, 224, 225, 226, 227, 228, 229, 230, 
231, 233, 234, 235, 236, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249]}, \"('CommandTab', None, 251)\": {'mod': [252, 253, 254, 272, 273, 274, 275, 277, 278, 279, 280, 292, 293, 294, 295, 296, 297, 299, 300, 301, 303, 304, 305, 306, 307, 308, 309, 326, 327, 328, 331, 332]}, \"('CommandTab', 'build_tab', 260)\": {'mod': [261, 262, 264, 265, 267, 268]}, \"('CommandTab', 'add_right_frame', 277)\": {'mod': [282, 283, 285, 287, 288, 290]}, \"('CommandTab', 'build_tabs', 304)\": {'mod': [312, 314, 315, 317, 318, 320, 321, 322, 323]}, \"('CommandTab', 'build_control', 331)\": {'mod': [335, 341, 342, 344, 345, 352, 354, 355]}, \"('CommandTab', 'add_browser_buttons', 357)\": {'mod': [358, 359, 361, 362]}, \"('CommandTab', 'ask_folder', 365)\": {'mod': [366]}, \"('CommandTab', 'ask_load', 372)\": {'mod': [373]}, \"('FaceswapControl', None, 378)\": {'mod': [379, 380, 381, 382, 383, 384, 386, 387, 388, 390, 391]}, \"('FaceswapControl', 'execute_script', 390)\": {'mod': [393, 394, 396, 397, 398, 403, 405, 406, 407, 408, 410, 411, 412]}, \"('FaceswapControl', 'launch_faceswap', 410)\": {'mod': [414, 415, 416, 417, 418, 419, 420, 421]}, \"('TKGui', '__init__', 425)\": {'mod': [428, 431, 433]}, \"('TKGui', 'set_control_title', 449)\": {'mod': [450, 452]}, \"('TKGui', 'set_control', 456)\": {'mod': [457, 459, 467]}, \"('TKGui', 'parse_arguments', 470)\": {'mod': [476, 477, 478]}}}, {'path': 'scripts/train.py', 'status': 'modified', 'Loc': {\"('TrainingProcessor', 'get_argument_list', 40)\": {'add': [108]}, '(None, None, None)': {'mod': [7]}, \"('TrainingProcessor', 'process', 141)\": {'mod': [164, 165]}, \"('TrainingProcessor', 'show', 226)\": {'mod': [228]}}}, {'path': 'tools.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 29]}}}, {'path': 'tools/sort.py', 'status': 'modified', 'Loc': {\"('SortProcessor', None, 35)\": {'add': [40], 'mod': [721, 722, 723, 724, 803, 804]}, '(None, None, None)': {'mod': [1, 11, 13, 30, 31, 32, 33, 817, 818]}, \"(None, 'import_face_recognition', 17)\": {'mod': [18]}, \"(None, 'import_FaceLandmarksExtractor', 23)\": {'mod': [24]}, \"('SortProcessor', '__init__', 36)\": {'mod': [37]}, \"('SortProcessor', 'parse_arguments', 41)\": {'mod': [53, 54, 55, 56, 57, 59, 60, 61, 62, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 112, 113, 114, 115, 116, 117, 118, 119, 120, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161]}, \"('SortProcessor', 'add_optional_arguments', 166)\": {'mod': [167]}, \"('SortProcessor', 'process_arguments', 170)\": {'mod': [171, 177, 178, 181, 182, 185, 186, 189, 190, 192, 194, 196, 199, 203, 204]}, \"('SortProcessor', 'process', 208)\": {'mod': [214, 215, 216, 218, 219, 220, 221, 224, 226, 234]}, \"('SortProcessor', 'sort_blur', 237)\": {'mod': [238, 240, 241, 242]}, \"('SortProcessor', 'sort_face', 248)\": {'mod': [251, 253, 255, 258, 260, 261, 272, 273]}, \"('SortProcessor', 'sort_face_dissim', 277)\": {'mod': [280, 282, 284, 287, 289, 300]}, \"('SortProcessor', 'sort_face_cnn', 304)\": {'mod': [307, 309, 312, 314, 317, 319, 320, 324, 329]}, \"('SortProcessor', 'sort_face_cnn_dissim', 333)\": {'mod': [336, 338, 341, 343, 346, 348, 353, 357]}, \"('SortProcessor', 'sort_face_yaw', 362)\": {'mod': [363, 364, 373, 376, 378, 380]}, \"('SortProcessor', 
'calc_landmarks_face_pitch', 363)\": {'mod': [366]}, \"('SortProcessor', 'calc_landmarks_face_yaw', 367)\": {'mod': [368, 369, 370]}, \"('SortProcessor', 'sort_hist', 385)\": {'mod': [386, 388, 390, 393, 395, 396, 397, 401]}, \"('SortProcessor', 'sort_hist_dissim', 405)\": {'mod': [406, 408, 410, 413, 415, 418, 423]}, \"('SortProcessor', 'group_blur', 429)\": {'mod': [431, 438, 439]}, \"('SortProcessor', 'group_face', 452)\": {'mod': [453, 465]}, \"('SortProcessor', 'group_face_cnn', 503)\": {'mod': [504, 517, 521]}, \"('SortProcessor', 'group_hist', 545)\": {'mod': [546, 555]}, \"('SortProcessor', 'final_process_rename', 578)\": {'mod': [579, 581, 584, 585, 587, 593, 595, 598, 600, 601, 605, 608, 610, 611]}, \"('SortProcessor', 'final_process_group', 613)\": {'mod': [614, 616, 620, 622, 623, 624, 626, 628, 632, 634, 636, 638, 639, 641, 642]}, \"('SortProcessor', 'reload_images', 645)\": {'mod': [657, 660, 662, 667, 670]}, \"('SortProcessor', 'find_images', 703)\": {'mod': [709]}, \"('SortProcessor', 'renaming', 759)\": {'mod': [762, 763]}, \"('SortProcessor', 'renaming', 769)\": {'mod': [772, 773]}, \"('SortProcessor', 'get_avg_score_hist', 778)\": {'mod': [783]}, \"('SortProcessor', 'get_avg_score_faces', 786)\": {'mod': [792]}, \"('SortProcessor', 'get_avg_score_faces_cnn', 795)\": {'mod': [798, 800]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["scripts/gui.py", "scripts/train.py", "faceswap.py", "tools.py", "tools/sort.py", "scripts/convert.py"], "doc": [], "test": [], "config": ["requirements-gpu-python36-cuda9.txt", "requirements-gpu-python35-cuda8.txt", "requirements-python35.txt", "requirements-python36.txt"], "asset": []}} {"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b9478049b3e8644be2de93015476b9111126d683", "iss_has_pr": 1, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/660", "iss_label": "bug", "title": "gpt4free useless: IndexError: list index out of range", "body": "**Bug description**\nTelegram bot using gpt4free not working\nmain.py:\n```import telebot\nfrom gpt4free import usesless\n\nbot = telebot.TeleBot('my_token')\n\n@bot.message_handler(commands=['start'])\ndef send_welcome(message):\n bot.reply_to(message, \"ChatGPT unlimited and free but without memory\")\n\n@bot.message_handler()\ndef test(message):\n prompt = \"\"\n req = usesless.Completion.create(prompt=prompt)\n prompt = message.text\n bot.reply_to(message, req[\"text\"])\n\nif __name__ == \"__main__\":\n bot.polling()\n```\nError:\n```\nTraceback (most recent call last): File \"main.py\", line 20, in bot.polling()\n\nFile \"/home/runner/Test/venv/lib/python3.1\n\n0/site-packages/telebot/__init__.py\", line 1 043, in polling self.__threaded_polling (non_stop=non_sto p, interval=interval, timeout=timeout, long_\n\npolling_timeout-long_polling_timeout,\n\nFile \"/home/runner/Test/venv/lib/python3.1\n\n0/site-packages/telebot/__init__.py\", line 1 118, in __threaded_polling\n\nraise e\n\nFile \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/__init__.py\", line 1 074, in threaded_polling\n\nself.worker_pool.raise_exceptions() File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/util.py\", line 147, in raise_exceptions\n\nraise self.exception_info File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/util.py\", line 90, i n run\n\ntask(*args, **kwargs) File 
\"/home/runner/Test/venv/lib/python3.1 0/site-packages/telebot/__init__.py\", line 6 770, in _run_middlewares_and_handler result = handler['function'](message)\n\nFile \"main.py\", line 15, in test\n\nreq = usesless.Completion.create(prompt=prompt) File \"/home/runner/Test/venv/lib/python3.1 0/site-packages/gpt4free/usesless/__init__.py\", line 46, in create\n\nresponse = Completion.__response_to_json (content) File \"/home/runner/Test/venv/lib/python3.10/site-packages/gpt4free/usesless/__init__.py\", line 53, in __response_to_json split_text = text.rsplit(\"\\n\", 1)[1]\n\nIndexError: list index out of range\n```\n\n**Environement**\n- python version 3.10\n- server location Poland\n\n**Additional context**\nIf you need more information to help me, please let me know.", "pr_html_url": "https://github.com/xtekky/gpt4free/pull/664", "file_loc": "{'base_commit': 'b9478049b3e8644be2de93015476b9111126d683', 'files': [{'path': 'gpt4free/usesless/__init__.py', 'status': 'modified', 'Loc': {\"('Completion', '__response_to_json', 148)\": {'mod': [151, 152, 153]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["gpt4free/usesless/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "keras-team", "repo_name": "keras", "base_commit": "aab55e649c34f8a24f00ee63922d049d3417c979", "iss_has_pr": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/8304", "iss_label": "", "title": "HDF5 Normalizer not working.", "body": "```\r\ndef preprocess_train(array):\r\n \"\"\" Given a batch of numpy arrays, it outputs a batch of numpy of arrays with all preprocessing\r\n\r\n size : (w, h)\r\n \"\"\"\r\n num1 = np.random.randint(0, 128 - 112)\r\n num2 = np.random.randint(0, 171 - 112)\r\n crop = array[ :, num1:num1+112, num2:num2+112, :]\r\n crop = crop/255.0\r\n return crop\r\n```\r\n```\r\nX_train = HDF5Matrix(train_loc, 'images', start=0, normalizer=preprocess_train)\r\ny_train = HDF5Matrix(train_loc, 'labels')\r\n````\r\n```\r\nmodel_final.fit(X_train, y_train, batch_size=16, shuffle='batch', validation_data = [X_test, y_test], epochs=10)\r\n```\r\n```\r\nValueError: Error when checking model input: expected conv1_input to have shape (None, 16, 112, 112, 3) but got array with shape (5797, 16, 128, 171, 3)\r\n```\r\nBasically I have a h5py file with shape (5797, 16, 128, 171, 3) and my preprocess function should output (16, 112, 112, 3). this is not happening.\r\n\r\nHowever when I run only X_train and used X_train.__getitem___(1). It outputs an array with (16, 112, 112, 3) shape. \r\n\r\nNot sure where I am going wrong. 
Can someone help me ?", "pr_html_url": "https://github.com/keras-team/keras/pull/10749", "file_loc": "{'base_commit': 'aab55e649c34f8a24f00ee63922d049d3417c979', 'files': [{'path': 'keras/utils/io_utils.py', 'status': 'modified', 'Loc': {\"('HDF5Matrix', '__init__', 44)\": {'add': [60]}, \"('HDF5Matrix', 'shape', 98)\": {'mod': [104]}, \"('HDF5Matrix', 'dtype', 107)\": {'mod': [113]}}}, {'path': 'tests/keras/utils/io_utils_test.py', 'status': 'modified', 'Loc': {\"(None, 'test_io_utils', 43)\": {'add': [106]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["keras/utils/io_utils.py", "tests/keras/utils/io_utils_test.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "home-assistant", "repo_name": "core", "base_commit": "8c8feb95a9c9048d655bc1eb263f6bc6ee61ee74", "iss_has_pr": 1, "iss_html_url": "https://github.com/home-assistant/core/issues/4", "iss_label": "", "title": "Instructions don't result in homeassistant listening on any port", "body": "Neither of these result in homeassistant listening on port `8123`\n\n``` bash\npython3 -m homeassistant\npython3 -m homeassistant --config=config\n```\n\nIn fact, it isn't seeming to be listening on _any_ port.\n\n``` bash\n(ve)[jeff@omniscience home-assistant] (master)$ ./build_frontend\n(ve)[jeff@omniscience home-assistant] (master)$ git status\nOn branch master\nYour branch is up-to-date with 'origin/master'.\nChanges not staged for commit:\n (use \"git add/rm ...\" to update what will be committed)\n (use \"git checkout -- ...\" to discard changes in working directory)\n\n modified: build_frontend\n deleted: config/home-assistant.conf.example\n modified: homeassistant/components/http/frontend.py\n modified: homeassistant/components/http/www_static/frontend.html\n\nUntracked files:\n (use \"git add ...\" to include in what will be committed)\n\n ve/\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")\n(ve)[jeff@omniscience home-assistant] (master)$ python3 -m homeassistant --config config\nINFO:homeassistant.loader:Loaded component demo from homeassistant.components.demo\nERROR:homeassistant.loader:Error loading homeassistant.components.http\nTraceback (most recent call last):\n File \"/home/jeff/git/home-assistant/homeassistant/loader.py\", line 91, in _get_component\n comp = importlib.import_module(module)\n File \"/usr/lib64/python3.4/importlib/__init__.py\", line 109, in import_module\n return _bootstrap._gcd_import(name[level:], package, level)\n File \"\", line 2254, in _gcd_import\n File \"\", line 2237, in _find_and_load\n File \"\", line 2226, in _find_and_load_unlocked\n File \"\", line 1200, in _load_unlocked\n File \"\", line 1129, in _exec\n File \"\", line 1471, in exec_module\n File \"\", line 321, in _call_with_frames_removed\n File \"/home/jeff/git/home-assistant/homeassistant/components/http/__init__.py\", line 86, in \n import homeassistant.remote as rem\n File \"/home/jeff/git/home-assistant/homeassistant/remote.py\", line 18, in \n import requests\nImportError: No module named 'requests'\nERROR:homeassistant.loader:Unable to load component http\nINFO:homeassistant.loader:Loaded component group from homeassistant.components.group\nINFO:homeassistant.bootstrap:Home Assistant core initialized\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant.loader:Loaded component sun from 
homeassistant.components.sun\nERROR:homeassistant.components.sun:Error while importing dependency ephem.\nTraceback (most recent call last):\n File \"/home/jeff/git/home-assistant/homeassistant/components/sun.py\", line 66, in setup\n import ephem\nImportError: No module named 'ephem'\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nERROR:homeassistant:WorkerPool:All 4 threads are busy and 17 jobs pending\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nERROR:homeassistant:WorkerPool:All 4 threads are busy and 33 jobs pending\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant:Bus:Handling >\nINFO:homeassistant.bootstrap:component demo initialized\nINFO:homeassistant.bootstrap:component group initialized\nINFO:homeassistant:Bus:Handling \nINFO:homeassistant:Timer:starting\nINFO:homeassistant:Bus:Handling \n^C\n```\n\nSadly, I can't use the docker container due to [docker being broken in Fedora 21](https://github.com/docker/docker/issues/7952) right now. So while it is running, I tried `lsof -i tcp:8123` and `lsof -p $(pidof python3)`\n\nThis is with Python 3.4.1 on Fedora 21 (pre-release) x86_64.\n\nFYI: I work on python automation code and django apps for `$REAL_JOB` and would love to help you improve this software if at all possible. I've got a home Insteon network and have SONOS speakers throughout the house. Once I get this all working, one of the first things I'd like to write is the integration between this and the SONOS xml api\n", "pr_html_url": "https://github.com/home-assistant/core/pull/35811", "file_loc": "{'base_commit': '8c8feb95a9c9048d655bc1eb263f6bc6ee61ee74', 'files': [{'path': 'homeassistant/components/google_assistant/helpers.py', 'status': 'modified', 'Loc': {\"('GoogleEntity', 'sync_serialize', 393)\": {'add': [428]}}}, {'path': 'tests/components/google_assistant/test_helpers.py', 'status': 'modified', 'Loc': {\"(None, 'test_google_entity_sync_serialize_with_local_sdk', 25)\": {'mod': [47, 48, 49, 50, 51, 52, 53, 54, 55]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["homeassistant/components/google_assistant/helpers.py"], "doc": [], "test": ["tests/components/google_assistant/test_helpers.py"], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "2203c3bccd5e4888a16d73247d540fd6e359d29c", "is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/1", "iss_label": "", "title": "Cookie support?", "body": "An feature request (not found in documentation).\n\nDoes this support cookies?\n\nUsecase: I can integrate this module inside an existings framework. 
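The requests record that begins above (issue #1) asks for cookie support, and its `file_loc` shows the resolving commit touching `Request.__init__` and `Request._get_opener` in `requests/core.py`, i.e. wiring a cookie handler into the urllib2 opener. A sketch of that mechanism using the modern stdlib names (illustrative only, not the 2009-era requests API):

```python
from http.cookiejar import Cookie, CookieJar
from urllib.request import HTTPCookieProcessor, build_opener

jar = CookieJar()
# Inject the session cookie the surrounding framework already generated.
jar.set_cookie(Cookie(
    version=0, name='session', value='abc123', port=None, port_specified=False,
    domain='example.com', domain_specified=True, domain_initial_dot=False,
    path='/', path_specified=True, secure=False, expires=None, discard=True,
    comment=None, comment_url=None, rest={}, rfc2109=False))

# Every request sent through this opener now carries the stored cookies,
# and Set-Cookie response headers are recorded back into the jar.
opener = build_opener(HTTPCookieProcessor(jar))
print([c.name for c in jar])  # ['session']
```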
This framework generate for me the authentication/session cookie, so to perform request using requests there I need to add the same auth cookie already generated.\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/psf/requests/commit/2203c3bccd5e4888a16d73247d540fd6e359d29c", "file_loc": "{'base_commit': '2203c3bccd5e4888a16d73247d540fd6e359d29c', 'files': [{'path': 'requests/core.py', 'status': 'modified', 'Loc': {\"('Request', '__init__', 68)\": {'add': [76]}, \"('Request', None, 61)\": {'add': [101]}, \"('Request', '_get_opener', 101)\": {'mod': [108, 109, 112, 113, 114]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "\u5e76\u6ca1\u6709\u627e\u5230\u5bf9\u5e94\u7684pr\uff0c\u8fd9\u4e00\u884c\u63d0\u4f9b\u7684pr\u4e5f\u65e0\u6cd5\u89e3\u51b3issue\u95ee\u9898\uff0c\u5728issue\u4e2d\u89e3\u51b3\u95ee\u9898\u7684\u662f\u4e00\u4e2acommit", "info_type": ""}, "loctype": {"code": ["requests/core.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "2203c3bccd5e4888a16d73247d540fd6e359d29c", "iss_html_url": "https://github.com/psf/requests/issues/1", "iss_label": "", "title": "Cookie support?", "body": "An feature request (not found in documentation).\n\nDoes this support cookies?\n\nUsecase: I can integrate this module inside an existings framework. This framework generate for me the authentication/session cookie, so to perform request using requests there I need to add the same auth cookie already generated.\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/psf/requests/commit/2203c3bccd5e4888a16d73247d540fd6e359d29c", "file_loc": "{'base_commit': '2203c3bccd5e4888a16d73247d540fd6e359d29c', 'files': [{'path': 'requests/core.py', 'status': 'modified', 'Loc': {\"('Request', '__init__', 68)\": {'add': [76]}, \"('Request', None, 61)\": {'add': [101]}, \"('Request', '_get_opener', 101)\": {'mod': [108, 109, 112, 113, 114]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "\u5e76\u6ca1\u6709\u627e\u5230\u5bf9\u5e94\u7684pr\uff0c\u8fd9\u4e00\u884c\u63d0\u4f9b\u7684pr\u4e5f\u65e0\u6cd5\u89e3\u51b3issue\u95ee\u9898\uff0c\u5728issue\u4e2d\u89e3\u51b3\u95ee\u9898\u7684\u662f\u4e00\u4e2acommit", "info_type": ""}, "loctype": {"code": ["requests/core.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "psf", "repo_name": "requests", "base_commit": "ac4e05874a1a983ca126185a0e4d4e74915f792e", "iss_has_pr": 1, "iss_html_url": "https://github.com/psf/requests/issues/1859", "iss_label": "", "title": "Brittle test", "body": "The test `test_expires_valid_str` fails on my OS X box, in Python 2.7:\n\n``` python\n============================= test session starts ==============================\nplatform darwin -- Python 2.7.5 -- pytest-2.3.4\nplugins: cov\ncollected 116 items \n\ntest_requests.py .................................................................................................................F..\n\n=================================== FAILURES ===================================\n_______________ TestMorselToCookieExpires.test_expires_valid_str _______________\n\nself = \n\n def test_expires_valid_str(self):\n \"\"\"Test case where we convert expires from string time.\"\"\"\n\n morsel = Morsel()\n morsel['expires'] = 'Thu, 01-Jan-1970 00:00:01 GMT'\n cookie = 
morsel_to_cookie(morsel)\n> assert cookie.expires == 1\nE AssertionError: assert -3599 == 1\nE + where -3599 = Cookie(version=0, name=None, value=None, port=None, port_specified=False, domain='', domain_specified=False, domain_in...False, secure=False, expires=-3599, discard=False, comment='', comment_url=False, rest={'HttpOnly': ''}, rfc2109=False).expires\n\ntest_requests.py:1111: AssertionError\n==================== 1 failed, 115 passed in 23.32 seconds =====================\n```\n\nI've not yet got a good theory for this, though I think it's telling that the error is one hour. I don't know _what_ it's telling though, because time is complicated.\n\nAnyway, this test needs to be rewritten to be more accepting of breakage. It's also possible that the intermittent failure of this test represents a problem with the `morsel_to_cookie` function itself, in which case that needs rewriting.\n", "pr_html_url": "https://github.com/psf/requests/pull/1860", "file_loc": "{'base_commit': 'ac4e05874a1a983ca126185a0e4d4e74915f792e', 'files': [{'path': 'requests/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, \"(None, 'morsel_to_cookie', 388)\": {'mod': [396, 397]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["requests/cookies.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "58ddd4338adf12a3abc2ffed0e27794a398fa8d2", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/994", "iss_label": "help wanted\nhacktoberfest", "title": "UnicodeDecodeError when using thefuck", "body": "I followed the alias guide, but I got an error when running thefuck in PowerShell:\r\n```\r\nTraceback (most recent call last):\r\n File \"d:\\python36\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"d:\\python36\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"D:\\Python36\\Scripts\\thefuck.exe\\__main__.py\", line 9, in \r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\entrypoints\\main.py\", line 26, in main\r\n fix_command(known_args)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\entrypoints\\fix_command.py\", line 36, in fix_command\r\n command = types.Command.from_raw_script(raw_command)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\types.py\", line 82, in from_raw_script\r\n output = get_output(script, expanded)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\output_readers\\__init__.py\", line 20, in get_output\r\n return rerun.get_output(script, expanded)\r\n File \"d:\\python36\\lib\\site-packages\\thefuck\\output_readers\\rerun.py\", line 62, in get_output\r\n output = result.stdout.read().decode('utf-8')\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xb2 in position 9: invalid start byte\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/1214", "file_loc": "{'base_commit': '58ddd4338adf12a3abc2ffed0e27794a398fa8d2', 'files': [{'path': 'tests/output_readers/test_rerun.py', 'status': 'modified', 'Loc': {\"('TestRerun', None, 9)\": {'add': [24]}}}, {'path': 'thefuck/output_readers/rerun.py', 'status': 'modified', 'Loc': {\"(None, 'get_output', 45)\": {'mod': [63]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", 
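The root cause in the thefuck record above is visible in its traceback: `rerun.get_output` reads raw bytes from the child process and decodes them strictly as UTF-8, which fails when a localized Windows console emits a legacy code page (byte `0xb2` is typical of GBK text). The record's `file_loc` points at exactly that line; a tolerant decode is the usual remedy, sketched here (not necessarily the exact code the PR landed):

```python
# GBK-encoded console output ("不是" on a zh-CN Windows box) -- not valid UTF-8.
raw = b"\xb2\xbb\xca\xc7"

try:
    raw.decode("utf-8")  # strict decode, as in the failing rerun.get_output
except UnicodeDecodeError as exc:
    print("strict utf-8 fails:", exc)

# Degrade to replacement characters instead of crashing the whole invocation.
print(raw.decode("utf-8", errors="replace"))
```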
"loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/output_readers/rerun.py"], "doc": [], "test": ["tests/output_readers/test_rerun.py"], "config": [], "asset": []}} {"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "c50287c23b3f35f54aa703823a8c3f9cbfc34377", "iss_has_pr": 1, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/233", "iss_label": "", "title": "Some faces with one eye hair covered can't be recognized", "body": "*First THANKS A LOT for all contributors' hard work!\r\n*Always make a compare test after big change, test with same source 1000 pics (kar801 -> kar1800) , compare with FakeApp1.1 & latest faceswap commit 232d931. \r\n*Test files [Link Removed]\r\n## Expected behavior\r\nNot sure, limitation ? or possible to improve ? \r\n\r\n## Actual behavior\r\nFakeApp1.1 extract rate is 988/1000\r\nfaceswap -D cnn extract rate is 943/1000\r\n\r\n[Image Removed]\r\n\r\nNotice that some faces - specially one eye covered by hair can't be extract. Example: kar1086 -> kar1090, these 5 pics can be extract normally in FakeApp, but failed in faceswap. Compare kar1085 with kar1086, no big gap in these 2 pics, just corner of the eye be covered by hair in kar1086. \u00a0\r\n\r\n## Steps to reproduce\r\npython faceswap.py extract -i D:/project4/data_A1/ -o D:/project4/data_A1/output/ -D cnn\r\n\r\n## Other relevant information\r\n\r\n- **Operating system and version:** Windows, macOS, Linux \r\nWindows10\r\nPython3.6.4\r\nCUDA9.0\r\ndlib 19.9\r\nThe others env same as requirements-gpu-python36-cuda9.txt\r\n\r\n", "pr_html_url": "https://github.com/deepfakes/faceswap/pull/236", "file_loc": "{'base_commit': 'c50287c23b3f35f54aa703823a8c3f9cbfc34377', 'files': [{'path': 'lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13], 'mod': [11, 12]}, \"(None, 'extract', 114)\": {'add': [162, 170], 'mod': [114, 115, 117, 118, 121, 123, 124, 125, 126, 127, 129, 130, 132, 133, 136, 139, 141, 143, 145, 146, 147, 148, 149, 150, 151, 152, 153, 155, 156, 157, 158, 161, 165, 169, 172]}, \"(None, 'onExit', 16)\": {'mod': [17, 18, 25, 26, 28, 29]}}}, {'path': 'lib/ModelAE.py', 'status': 'modified', 'Loc': {\"('TrainerAE', 'show_sample', 69)\": {'add': [80], 'mod': [82]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py", "lib/ModelAE.py"], "doc": [], "test": [], "config": [], "asset": []}} @@ -239,114 +239,114 @@ {"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "4c2a566acc37c8d95b07c023f8c52a1a2a5d15bf", "iss_has_pr": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/2186", "iss_label": "bug\nneeds investigation\nAPI access", "title": "Azure support broken?", "body": "### \u26a0\ufe0f Search for existing issues first \u26a0\ufe0f\r\n\r\n- [X] I have searched the existing issues, and there is no existing issue for my problem\r\n\r\n### GPT-3 or GPT-4\r\n\r\n- [ ] I am using Auto-GPT with GPT-3 (GPT-3.5)\r\n\r\n### Steps to reproduce \ud83d\udd79\r\n\r\n```yaml\r\nazure.yaml:\r\nazure_api_type: azure\r\nazure_api_base: https://test.openai.azure.com/\r\nazure_api_version: 2023-03-15-preview\r\nazure_model_map:\r\n fast_llm_model_deployment_id: \"gpt-35-turbo\"\r\n smart_llm_model_deployment_id: \"gpt-4\"\r\n embedding_model_deployment_id: 
\"emb-ada\" \r\n```\r\n\r\n### Current behavior \ud83d\ude2f\r\n\r\nWhen I run \"python -m autogpt\", it just broken\r\nWelcome back! Would you like me to return to being Entrepreneur-GPT?\r\nContinue with the last settings?\r\nName: Entrepreneur-GPT\r\nRole: an AI designed to autonomously develop and run businesses with the\r\nGoals: ['Increase net worth', 'Grow Twitter Account', 'Develop and manage multiple businesses autonomously']\r\nContinue (y/n): y\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n\r\nTraceback (most recent call last):\r\n File \"\", line 198, in _run_module_as_main\r\n File \"\", line 88, in _run_code\r\n File \"/data/Auto-GPT/autogpt/__main__.py\", line 50, in \r\n main()\r\n File \"/data/Auto-GPT/autogpt/__main__.py\", line 46, in main\r\n agent.start_interaction_loop()\r\n File \"/data/Auto-GPT/autogpt/agent/agent.py\", line 75, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n ^^^^^^^^^^^^^\r\n File \"/data/Auto-GPT/autogpt/chat.py\", line 159, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/Auto-GPT/autogpt/llm_utils.py\", line 84, in create_chat_completion\r\n deployment_id=CFG.get_azure_deployment_id_for_model(model),\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/data/Auto-GPT/autogpt/config/config.py\", line 120, in get_azure_deployment_id_for_model\r\n return self.azure_model_to_deployment_id_map[\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nTypeError: list indices must be integers or slices, not str\r\n\r\n\r\n### Expected behavior \ud83e\udd14\r\n\r\nIt should works well.\r\n\r\n### Your prompt \ud83d\udcdd\r\n\r\n```yaml\r\n# Paste your prompt here\r\n```\r\n", "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/2351", "file_loc": "{'base_commit': '4c2a566acc37c8d95b07c023f8c52a1a2a5d15bf', 'files': [{'path': 'autogpt/config/config.py', 'status': 'modified', 'Loc': {\"('Config', 'load_azure_config', 136)\": {'mod': [157]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["autogpt/config/config.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "nvbn", "repo_name": "thefuck", "base_commit": "f966ecd4f5b8221ee15e843f5ec287e1f7cca940", "iss_has_pr": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/740", "iss_label": "", "title": "wrong suggestion with git push --set-upstream ", "body": "Thefuck is incorrectly adding the remote name at the end of the command suggestion:\r\n\r\n```\r\n$ git push myfork\r\nfatal: The current branch test-branch has no upstream branch.\r\nTo push the current branch and set the remote as upstream, use\r\n\r\n git push --set-upstream myfork test-branch\r\n\r\n$ fuck\r\ngit push --set-upstream myfork test-branch myfork [enter/\u2191/\u2193/ctrl+c]\r\nerror: src refspec myfork does not match any.\r\nerror: failed to push some refs to 'git@github.com:waldyrious/project-foo.git'\r\n```", "pr_html_url": "https://github.com/nvbn/thefuck/pull/745", "file_loc": "{'base_commit': 'f966ecd4f5b8221ee15e843f5ec287e1f7cca940', 'files': [{'path': 'tests/rules/test_git_push.py', 'status': 'modified', 'Loc': {\"(None, 'test_get_new_command', 23)\": {'add': [25]}}}, {'path': 'thefuck/rules/git_push.py', 'status': 'modified', 'Loc': {\"(None, 'get_new_command', 22)\": {'add': [34]}}}]}", "own_code_loc": [], "ass_file_loc": [], 
"other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": ["thefuck/rules/git_push.py"], "doc": [], "test": ["tests/rules/test_git_push.py"], "config": [], "asset": []}} {"organization": "huggingface", "repo_name": "transformers", "base_commit": "e4b234834a79541f31be227aadce13f5aafda85a", "iss_has_pr": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/16497", "iss_label": "WIP", "title": "[TODO] Investigate equivalence tests", "body": "**(add a lot of assignees just to make you informed and kept updated in the future. Don't hesitate to remove yourself if you think it's irrelevant)**\r\n\r\nCurrently the PT/TF/Flax equivalence tests use `1e-5` as the tolerance for the absolute differences of outputs.\r\n\r\nWe see that these tests failed with a non-negligible (although not carefully defined) frequency.\r\n\r\nCreate this page to track a list of models to investigate.\r\n\r\n- **FlaxWav2Vec2ModelTest** (2.2888184e-05 > 1e-5)\r\n - https://app.circleci.com/pipelines/github/huggingface/transformers/37363/workflows/a4b06424-0ba8-4fbc-9054-6ff52fbf8145/jobs/411654 \r\n\r\n- **TFGPT2EncoderDecoderModelTest** (0.001009281724691391 > 1e-3)\r\n - https://app.circleci.com/pipelines/github/huggingface/transformers/37358/workflows/43c12161-33d8-4df5-ba3c-3e62a4507ee7/jobs/411579\r\n - This also happens to **TFBERTEncoderDecoderModelTest**\r\n - This is caused by some sequence in a batch which gets all 0s as attention mask (generated by ids_tensor) - may happens on both encoder and decoder (especially after combining with the causal mask).\r\n - For **TFBERTEncoderDecoderModelTest**, the difference is smaller than *TFGPT2EncoderDecoderModelTest* (by a magnitude of 5x~10x) -> this is due to the last hidden states in GPT2 is after layer norm (not the case for BERT).\r\n - If we look the cross attention diff between PT/TF, it is clear that we have the same issue (both in the magnitude of `1e-3`)\r\n - The encoder attention diff between PT/TF is in the magnitude of `5e-8`: ~~**not very sure why this doesn't get much larger**~~.\r\n - This is because PT/TF (at least in BERT) has different `encoder_extended_attention_mask`: `1e-4` vs `1e-9`.\r\n\r\n- **TFViTMAEModelTest** (1.013279e-05 > 1e-5)\r\n - https://app.circleci.com/pipelines/github/huggingface/transformers/37319/workflows/5adfba7a-d12b-4e1e-9a7a-e33c7d5fd6ee/jobs/411002", "pr_html_url": "https://github.com/huggingface/transformers/pull/16517", "file_loc": "{'base_commit': 'e4b234834a79541f31be227aadce13f5aafda85a', 'files': [{'path': 'templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, \"(None, 'prepare_config_and_inputs', 90)\": {'mod': [95]}}}, {'path': 'tests/albert/test_modeling_tf_albert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, \"('TFAlbertModelTester', 'prepare_config_and_inputs', 94)\": {'mod': [99]}}}, {'path': 'tests/bert/test_modeling_tf_bert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, \"('TFBertModelTester', 'prepare_config_and_inputs', 94)\": {'mod': [99]}}}, {'path': 'tests/clip/test_modeling_tf_clip.py', 'status': 'modified', 'Loc': {\"('TFCLIPTextModelTester', 'prepare_config_and_inputs', 298)\": {'add': [303]}}}, {'path': 'tests/convbert/test_modeling_tf_convbert.py', 'status': 'modified', 'Loc': 
{'(None, None, None)': {'mod': [23]}, \"('TFConvBertModelTester', 'prepare_config_and_inputs', 92)\": {'mod': [97]}}}, {'path': 'tests/ctrl/test_modeling_tf_ctrl.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFCTRLModelTester', 'prepare_config_and_inputs', 67)\": {'mod': [72]}}}, {'path': 'tests/deberta/test_modeling_tf_deberta.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFDebertaModelTester', 'prepare_config_and_inputs', 90)\": {'mod': [95]}}}, {'path': 'tests/deberta_v2/test_modeling_tf_deberta_v2.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFDebertaV2ModelTester', 'prepare_config_and_inputs', 93)\": {'mod': [98]}}}, {'path': 'tests/distilbert/test_modeling_tf_distilbert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFDistilBertModelTester', 'prepare_config_and_inputs', 68)\": {'mod': [73]}}}, {'path': 'tests/dpr/test_modeling_tf_dpr.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}, \"('TFDPRModelTester', 'prepare_config_and_inputs', 92)\": {'mod': [97, 98, 99]}}}, {'path': 'tests/electra/test_modeling_tf_electra.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFElectraModelTester', 'prepare_config_and_inputs', 69)\": {'mod': [74]}}}, {'path': 'tests/flaubert/test_modeling_tf_flaubert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}, \"('TFFlaubertModelTester', 'prepare_config_and_inputs', 76)\": {'mod': [78]}}}, {'path': 'tests/funnel/test_modeling_tf_funnel.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFFunnelModelTester', 'prepare_config_and_inputs', 109)\": {'mod': [114]}}}, {'path': 'tests/gpt2/test_modeling_tf_gpt2.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}, \"('TFGPT2ModelTester', 'prepare_config_and_inputs', 72)\": {'mod': [77]}}}, {'path': 'tests/gptj/test_modeling_tf_gptj.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFGPTJModelTester', 'prepare_config_and_inputs', 68)\": {'mod': [73]}}}, {'path': 'tests/layoutlm/test_modeling_tf_layoutlm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, \"('TFLayoutLMModelTester', 'prepare_config_and_inputs', 90)\": {'mod': [110]}}}, {'path': 'tests/longformer/test_modeling_tf_longformer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFLongformerModelTester', 'prepare_config_and_inputs', 77)\": {'mod': [82]}}}, {'path': 'tests/lxmert/test_modeling_tf_lxmert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [26]}, \"('TFLxmertModelTester', 'prepare_config_and_inputs', 119)\": {'mod': [127]}}}, {'path': 'tests/mobilebert/test_modeling_tf_mobilebert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFMobileBertModelTester', 'prepare_config_and_inputs', 112)\": {'mod': [117]}}}, {'path': 'tests/mpnet/test_modeling_tf_mpnet.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFMPNetModelTester', 'prepare_config_and_inputs', 88)\": {'mod': [93]}}}, {'path': 'tests/openai/test_modeling_tf_openai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFOpenAIGPTModelTester', 'prepare_config_and_inputs', 68)\": {'mod': [73]}}}, {'path': 'tests/rembert/test_modeling_tf_rembert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFRemBertModelTester', 'prepare_config_and_inputs', 93)\": {'mod': [98]}}}, {'path': 
'tests/roberta/test_modeling_tf_roberta.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFRobertaModelTester', 'prepare_config_and_inputs', 70)\": {'mod': [75]}}}, {'path': 'tests/roformer/test_modeling_tf_roformer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFRoFormerModelTester', 'prepare_config_and_inputs', 93)\": {'mod': [98]}}}, {'path': 'tests/t5/test_modeling_tf_t5.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFT5ModelTester', 'prepare_config_and_inputs', 56)\": {'mod': [61]}}}, {'path': 'tests/tapas/test_modeling_tf_tapas.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [41]}, \"('TFTapasModelTester', 'prepare_config_and_inputs', 156)\": {'mod': [161]}}}, {'path': 'tests/test_modeling_tf_common.py', 'status': 'modified', 'Loc': {\"(None, 'random_attention_mask', 1440)\": {'mod': [1443]}}}, {'path': 'tests/xlm/test_modeling_tf_xlm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}, \"('TFXLMModelTester', 'prepare_config_and_inputs', 76)\": {'mod': [78]}}}, {'path': 'tests/xlnet/test_modeling_tf_xlnet.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25]}, \"('TFXLNetModelTester', 'prepare_config_and_inputs', 74)\": {'mod': [78]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": ""}, "loctype": {"code": [], "doc": [], "test": ["tests/test_modeling_tf_common.py", "templates/adding_a_new_model/cookiecutter-template-{{cookiecutter.modelname}}/test_modeling_tf_{{cookiecutter.lowercase_modelname}}.py", "tests/openai/test_modeling_tf_openai.py", "tests/funnel/test_modeling_tf_funnel.py", "tests/convbert/test_modeling_tf_convbert.py", "tests/bert/test_modeling_tf_bert.py", "tests/roformer/test_modeling_tf_roformer.py", "tests/t5/test_modeling_tf_t5.py", "tests/lxmert/test_modeling_tf_lxmert.py", "tests/mpnet/test_modeling_tf_mpnet.py", "tests/rembert/test_modeling_tf_rembert.py", "tests/layoutlm/test_modeling_tf_layoutlm.py", "tests/dpr/test_modeling_tf_dpr.py", "tests/gptj/test_modeling_tf_gptj.py", "tests/roberta/test_modeling_tf_roberta.py", "tests/flaubert/test_modeling_tf_flaubert.py", "tests/clip/test_modeling_tf_clip.py", "tests/tapas/test_modeling_tf_tapas.py", "tests/deberta/test_modeling_tf_deberta.py", "tests/electra/test_modeling_tf_electra.py", "tests/gpt2/test_modeling_tf_gpt2.py", "tests/xlm/test_modeling_tf_xlm.py", "tests/longformer/test_modeling_tf_longformer.py", "tests/deberta_v2/test_modeling_tf_deberta_v2.py", "tests/distilbert/test_modeling_tf_distilbert.py", "tests/albert/test_modeling_tf_albert.py", "tests/xlnet/test_modeling_tf_xlnet.py", "tests/mobilebert/test_modeling_tf_mobilebert.py", "tests/ctrl/test_modeling_tf_ctrl.py"], "config": [], "asset": []}} -{"organization": "pallets", "repo_name": "flask", "base_commit": "01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/1971", "iss_label": "", "title": "Implement RFC 7233", "body": "It would be great to support [RFC 7233 : Hypertext Transfer Protocol (HTTP/1.1): Range Requests](https://tools.ietf.org/html/rfc7233) for next major version, at least for non multipart/byteranges media type.\n\nI'm willing to implement this, so please share your thoughts about this.\n\nWhat must be done:\n- Modify `send_file` method to support Range Requests\n - Use existing `conditionnal` parameter 
to enable Range Requests support ?\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/2031", "commit_html_url": null, "file_loc": "{'base_commit': '01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e', 'files': [{'path': 'CHANGES', 'status': 'modified', 'Loc': {'(None, None, 20)': {'add': [20]}}}, {'path': 'flask/helpers.py', 'status': 'modified', 'Loc': {\"(None, 'send_file', 430)\": {'add': [448, 502], 'mod': [538, 544, 578]}, '(None, None, None)': {'mod': [28, 29]}}}, {'path': 'tests/test_helpers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, \"('TestSendfile', None, 356)\": {'add': [464]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/helpers.py"], "doc": ["CHANGES"], "test": ["tests/test_helpers.py"], "config": [], "asset": []}} -{"organization": "pallets", "repo_name": "flask", "base_commit": "673e5af658cf029e82d87047dcb7ebee3d343d10", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/2823", "iss_label": "", "title": "Flask complains a .env file exists when not using python-dotenv, even though that .env is a directory", "body": "I place my virtualenvs in a `.env` directory in my project directory. Flask 1.x sees this directory and thinks it might be a \"dotenv\" file (even though it is a directory).\r\n\r\n### Expected Behavior\r\n\r\n`flask` should ignore a `.env` directory when `python-dotenv` is not installed.\r\n\r\n### Actual Behavior\r\n\r\n`flask` says:\r\n\r\n> * Tip: There are .env files present. Do \"pip install python-dotenv\" to use them.\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/2827", "commit_html_url": null, "file_loc": "{'base_commit': '673e5af658cf029e82d87047dcb7ebee3d343d10', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {\"(None, 'load_dotenv', 567)\": {'mod': [587]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/cli.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pallets", "repo_name": "flask", "base_commit": "8e589daaf2cec6a10262b8ff88801127f2fa14fd", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/4220", "iss_label": "", "title": "`template_filter` decorator typing does not support custom filters with multiple arguments", "body": "`template_filter` decorator typing does not support custom filters that take in multiple arguments. 
Consider:\r\n\r\n```py\r\nfrom flask import Flask\r\n\r\n\r\napp = Flask(__name__)\r\n\r\n\r\n@app.template_filter('foo_bar')\r\ndef foo_bar_filter(foo, bar):\r\n return f'{foo} {bar}'\r\n```\r\n`mypy` will return the following error message:\r\n```\r\nerror: Argument 1 has incompatible type \"Callable[[Any, Any], Any]\"; expected \"Callable[[Any], str]\" [arg-type]\r\n```\r\nAs custom filters with multiple arguments are supported by Jinja (https://jinja.palletsprojects.com/en/3.0.x/api/#custom-filters), I think this typing error is a false positive.\r\n\r\nEnvironment:\r\n\r\n- Python version: 3.6.13\r\n- Flask version: 2.0.1\r\n- Mypy version: 0.812\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pallets/flask/commit/8e589daaf2cec6a10262b8ff88801127f2fa14fd", "file_loc": "{'base_commit': '8e589daaf2cec6a10262b8ff88801127f2fa14fd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 10)': {'add': [10]}}}, {'path': 'src/flask/typing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [43, 44, 45]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/flask/typing.py"], "doc": ["CHANGES.rst"], "test": [], "config": [], "asset": []}} -{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "a4df5010f49044eb1f1713057e8914e6a5a104b3", "is_iss": 0, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1073", "iss_label": "false positive", "title": "producthunt.com false positive", "body": "\r\n\r\n## Checklist\r\n\r\n- [X] I'm reporting a website that is returning **false positive** results\r\n- [X] I've checked for similar site support requests including closed ones\r\n- [X] I've checked for pull requests attempting to fix this false positive\r\n- [X] I'm only reporting **one** site (create a seperate issue for each site)\r\n\r\n## Description\r\n\r\n\r\nhttps://www.producthunt.com/@adasaaakzzzzzzzzsdsdsdasdadadasqe22aasd\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/sherlock-project/sherlock/commit/a4df5010f49044eb1f1713057e8914e6a5a104b3", "file_loc": "{'base_commit': 'a4df5010f49044eb1f1713057e8914e6a5a104b3', 'files': [{'path': 'sherlock/resources/data.json', 'status': 'modified', 'Loc': {'(None, None, 1159)': {'mod': [1159]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sherlock/resources/data.json"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "d2803c0fb7d0ba9361dcba8eb9bcebbf2f774958", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/11023", "iss_label": "", "title": "Cannot load_model", "body": "Thank you!\r\n\r\n- [ ] Check that you are up-to-date with the master branch of Keras. You can update with:\r\npip install git+git://github.com/keras-team/keras.git --upgrade --no-deps\r\n\r\n- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](https://www.tensorflow.org/get_started/os_setup).\r\n\r\n- [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\r\npip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\r\n\r\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\r\n\r\nI am using Google Colab to train a CNN and then save the entire model to a `.h5` file. The code is available here: [CNN-Colab](https://gist.github.com/abhisheksoni27/184c49ca703eb124e1b17eb8dd8af518)\r\n\r\nThe model gets saved but when I later try to load it back, I get the following error:\r\n\r\n```\r\nTypeError: float() argument must be a string or a number, not 'dict'\r\n```\r\n\r\nThe entire Output log is here: [CNN - Colab - Error](https://gist.github.com/abhisheksoni27/732bec240629d2dd721e80130cb2956b)\r\n", "code": null, "pr_html_url": "https://github.com/keras-team/keras/pull/10727", "commit_html_url": null, "file_loc": "{'base_commit': 'd2803c0fb7d0ba9361dcba8eb9bcebbf2f774958', 'files': [{'path': 'keras/engine/saving.py', 'status': 'modified', 'Loc': {\"(None, 'get_json_type', 61)\": {'mod': [82, 83]}}}, {'path': 'tests/test_model_saving.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 643]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/engine/saving.py"], "doc": [], "test": ["tests/test_model_saving.py"], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "b28ece0f34e54d1c980e31223451f3b2f0f20ff9", "is_iss": 0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/1021", "iss_label": "", "title": "Git checkout should provide multiple corrections", "body": "When correcting git checkout, the default is to use the 'closest branch'. We have a lot of branches with similar names, but quite often, what I actually meant to do was supply the '-b' flag.\r\n\r\nCan the git checkout rule be updated to return all of the possible options, rather than trying to guess, based on some arbitrary priority?\r\n", "code": null, "pr_html_url": "https://github.com/nvbn/thefuck/pull/1022", "commit_html_url": null, "file_loc": "{'base_commit': 'b28ece0f34e54d1c980e31223451f3b2f0f20ff9', 'files': [{'path': 'tests/rules/test_git_checkout.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [59, 62, 66, 70]}}}, {'path': 'thefuck/rules/git_checkout.py', 'status': 'modified', 'Loc': {\"(None, 'get_new_command', 31)\": {'add': [36], 'mod': [38, 39, 40, 41, 42, 43]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/git_checkout.py"], "doc": [], "test": ["tests/rules/test_git_checkout.py"], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "2d81166213c403dce5c04d1fb73ba5d3e57d6676", "is_iss": 0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/660", "iss_label": "", "title": "Slow execution time", "body": "The command output is very slow on macOS w/ fish shell. 
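For the git_checkout record above (issue #1021), the requested fix direction is to stop guessing a single "closest branch" and instead return a ranked list of candidates, always including `git checkout -b`. A toy version of that ranking, assuming `all_branches` is already known (the real rule parses it from git's output):

```python
from difflib import get_close_matches

def get_new_commands(branch, all_branches):
    candidates = ['git checkout ' + b
                  for b in get_close_matches(branch, all_branches, n=3, cutoff=0.6)]
    # Always offer creating the branch, since that is often what was meant.
    candidates.append('git checkout -b ' + branch)
    return candidates

print(get_new_commands('feature/loign', ['feature/login', 'feature/logout', 'main']))
# ['git checkout feature/login', 'git checkout feature/logout',
#  'git checkout -b feature/loign']
```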
Reproduction rate is ~80% for me.\r\n\r\nVersion: The Fuck 3.18 using Python 2.7.10\r\nShell: fish, version 2.6.0\r\nOS: macOS 10.12.5\r\nDebug Output:\r\n```\r\n\u276f fuck 333ms\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'no_colors': False,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/Users/sbennett/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Execution timed out!\r\nDEBUG: Call: fish -ic \"fuck\"; with env: {'PYTHONIOENCODING': 'utf-8', 'VERSIONER_PYTHON_PREFER_32_BIT': 'no', 'TERM_PROGRAM_VERSION': '3.0.15', 'LOGNAME': 'sbennett', 'USER': 'sbennett', 'HOME': '/Users/sbennett', 'PATH': '/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin', 'TERM_PROGRAM': 'iTerm.app', 'LANG': 'C', 'THEFUCK_DEBUG': 'true', 'TERM': 'xterm-256color', 'Apple_PubSub_Socket_Render': '/private/tmp/com.apple.launchd.1eq3gwtm7Y/Render', 'COLORFGBG': '15;0', 'VERSIONER_PYTHON_VERSION': '2.7', 'SHLVL': '2', 'XPC_FLAGS': '0x0', 'ITERM_SESSION_ID': 'w1t1p0:E781FA41-C385-4CCE-A9E0-EBF190B3D246', 'TERM_SESSION_ID': 'w1t1p0:E781FA41-C385-4CCE-A9E0-EBF190B3D246', 'SSH_AUTH_SOCK': '/private/tmp/com.apple.launchd.leMomVKppy/Listeners', 'TF_ALIAS': 'fuck', 'XPC_SERVICE_NAME': '0', 'SHELL': '/usr/local/bin/fish', 'ITERM_PROFILE': 'Default', 'LC_ALL': 'C', 'TMPDIR': '/var/folders/0s/c0f2hl495352w24847p7ybwm35h1r_/T/', 'GIT_TRACE': '1', '__CF_USER_TEXT_ENCODING': '0x658070A:0x0:0x0', 'PWD': '/Users/sbennett'}; is slow: took: 0:00:03.018166\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000511\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.000571\r\nDEBUG: Importing rule: apt_get_search; took: 0:00:00.000224\r\nDEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000715\r\nDEBUG: Importing rule: aws_cli; took: 0:00:00.000235\r\nDEBUG: Importing rule: brew_install; took: 0:00:00.000279\r\nDEBUG: Importing rule: brew_link; took: 0:00:00.000217\r\nDEBUG: Importing rule: brew_uninstall; took: 0:00:00.000276\r\nDEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000105\r\nDEBUG: Importing rule: brew_update_formula; took: 0:00:00.000222\r\nDEBUG: Importing rule: brew_upgrade; took: 0:00:00.000061\r\nDEBUG: Importing rule: cargo; took: 0:00:00.000049\r\nDEBUG: Importing rule: cargo_no_command; took: 0:00:00.000223\r\nDEBUG: Importing rule: cd_correction; took: 0:00:00.000950\r\nDEBUG: Importing rule: cd_mkdir; took: 0:00:00.000342\r\nDEBUG: Importing rule: cd_parent; took: 0:00:00.000050\r\nDEBUG: Importing rule: chmod_x; took: 0:00:00.000058\r\nDEBUG: Importing rule: composer_not_command; took: 0:00:00.001520\r\nDEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000677\r\nDEBUG: Importing rule: cpp11; took: 0:00:00.000324\r\nDEBUG: Importing rule: dirty_untar; took: 0:00:00.001812\r\nDEBUG: Importing rule: dirty_unzip; took: 0:00:00.000257\r\nDEBUG: Importing rule: django_south_ghost; took: 0:00:00.000066\r\nDEBUG: Importing rule: django_south_merge; took: 0:00:00.000113\r\nDEBUG: Importing rule: docker_not_command; took: 0:00:00.000528\r\nDEBUG: Importing rule: dry; took: 0:00:00.000068\r\nDEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000396\r\nDEBUG: Importing rule: fix_alt_space; took: 0:00:00.000337\r\nDEBUG: Importing rule: fix_file; took: 
0:00:00.003110\r\nDEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000506\r\nDEBUG: Importing rule: git_add; took: 0:00:00.000520\r\nDEBUG: Importing rule: git_add_force; took: 0:00:00.000252\r\nDEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000249\r\nDEBUG: Importing rule: git_branch_delete; took: 0:00:00.000232\r\nDEBUG: Importing rule: git_branch_exists; took: 0:00:00.000309\r\nDEBUG: Importing rule: git_branch_list; took: 0:00:00.000236\r\nDEBUG: Importing rule: git_checkout; took: 0:00:00.000254\r\nDEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000238\r\nDEBUG: Importing rule: git_diff_staged; took: 0:00:00.000228\r\nDEBUG: Importing rule: git_fix_stash; took: 0:00:00.000252\r\nDEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000231\r\nDEBUG: Importing rule: git_help_aliased; took: 0:00:00.000231\r\nDEBUG: Importing rule: git_not_command; took: 0:00:00.000363\r\nDEBUG: Importing rule: git_pull; took: 0:00:00.000242\r\nDEBUG: Importing rule: git_pull_clone; took: 0:00:00.000239\r\nDEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000244\r\nDEBUG: Importing rule: git_push; took: 0:00:00.000246\r\nDEBUG: Importing rule: git_push_force; took: 0:00:00.000238\r\nDEBUG: Importing rule: git_push_pull; took: 0:00:00.000221\r\nDEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000343\r\nDEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000250\r\nDEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000164\r\nDEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000159\r\nDEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000241\r\nDEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000493\r\nDEBUG: Importing rule: git_rm_staged; took: 0:00:00.000347\r\nDEBUG: Importing rule: git_stash; took: 0:00:00.000286\r\nDEBUG: Importing rule: git_stash_pop; took: 0:00:00.000281\r\nDEBUG: Importing rule: git_tag_force; took: 0:00:00.000268\r\nDEBUG: Importing rule: git_two_dashes; took: 0:00:00.000239\r\nDEBUG: Importing rule: go_run; took: 0:00:00.000217\r\nDEBUG: Importing rule: gradle_no_task; took: 0:00:00.000566\r\nDEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000227\r\nDEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000235\r\nDEBUG: Importing rule: grep_recursive; took: 0:00:00.000222\r\nDEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000479\r\nDEBUG: Importing rule: gulp_not_task; took: 0:00:00.000227\r\nDEBUG: Importing rule: has_exists_script; took: 0:00:00.000240\r\nDEBUG: Importing rule: heroku_not_command; took: 0:00:00.000310\r\nDEBUG: Importing rule: history; took: 0:00:00.000067\r\nDEBUG: Importing rule: hostscli; took: 0:00:00.000383\r\nDEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000296\r\nDEBUG: Importing rule: java; took: 0:00:00.000226\r\nDEBUG: Importing rule: javac; took: 0:00:00.000216\r\nDEBUG: Importing rule: lein_not_task; took: 0:00:00.000370\r\nDEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000237\r\nDEBUG: Importing rule: ln_s_order; took: 0:00:00.000241\r\nDEBUG: Importing rule: ls_all; took: 0:00:00.000208\r\nDEBUG: Importing rule: ls_lah; took: 0:00:00.000347\r\nDEBUG: Importing rule: man; took: 0:00:00.000241\r\nDEBUG: Importing rule: man_no_space; took: 0:00:00.000062\r\nDEBUG: Importing rule: mercurial; took: 0:00:00.000234\r\nDEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000085\r\nDEBUG: Importing rule: mkdir_p; took: 0:00:00.000252\r\nDEBUG: Importing rule: mvn_no_command; took: 
0:00:00.000213\r\nDEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000260\r\nDEBUG: Importing rule: no_command; took: 0:00:00.000261\r\nDEBUG: Importing rule: no_such_file; took: 0:00:00.000066\r\nDEBUG: Importing rule: npm_missing_script; took: 0:00:00.000593\r\nDEBUG: Importing rule: npm_run_script; took: 0:00:00.000235\r\nDEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000378\r\nDEBUG: Importing rule: open; took: 0:00:00.000605\r\nDEBUG: Importing rule: pacman; took: 0:00:00.000366\r\nDEBUG: Importing rule: pacman_not_found; took: 0:00:00.000111\r\nDEBUG: Importing rule: path_from_history; took: 0:00:00.000099\r\nDEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000315\r\nDEBUG: Importing rule: port_already_in_use; took: 0:00:00.000183\r\nDEBUG: Importing rule: python_command; took: 0:00:00.000261\r\nDEBUG: Importing rule: python_execute; took: 0:00:00.000232\r\nDEBUG: Importing rule: quotation_marks; took: 0:00:00.000052\r\nDEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000224\r\nDEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000051\r\nDEBUG: Importing rule: rm_dir; took: 0:00:00.000242\r\nDEBUG: Importing rule: rm_root; took: 0:00:00.000235\r\nDEBUG: Importing rule: scm_correction; took: 0:00:00.000254\r\nDEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000222\r\nDEBUG: Importing rule: sl_ls; took: 0:00:00.000052\r\nDEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000239\r\nDEBUG: Importing rule: sudo; took: 0:00:00.000059\r\nDEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000231\r\nDEBUG: Importing rule: switch_lang; took: 0:00:00.000091\r\nDEBUG: Importing rule: systemctl; took: 0:00:00.000378\r\nDEBUG: Importing rule: test.py; took: 0:00:00.000051\r\nDEBUG: Importing rule: tmux; took: 0:00:00.000212\r\nDEBUG: Importing rule: touch; took: 0:00:00.000223\r\nDEBUG: Importing rule: tsuru_login; took: 0:00:00.000281\r\nDEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000223\r\nDEBUG: Importing rule: unknown_command; took: 0:00:00.000062\r\nDEBUG: Importing rule: vagrant_up; took: 0:00:00.000308\r\nDEBUG: Importing rule: whois; took: 0:00:00.000282\r\nDEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000309\r\nDEBUG: Importing rule: yarn_alias; took: 0:00:00.000219\r\nDEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000494\r\nDEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000357\r\nDEBUG: Importing rule: yarn_help; took: 0:00:00.000232\r\nDEBUG: Trying rule: dirty_unzip; took: 0:00:00.000568\r\nNo fucks given\r\nDEBUG: Total took: 0:00:03.282835\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/2d81166213c403dce5c04d1fb73ba5d3e57d6676", "file_loc": "{'base_commit': '2d81166213c403dce5c04d1fb73ba5d3e57d6676', 'files': [{'path': 'tests/shells/test_fish.py', 'status': 'modified', 'Loc': {\"('TestFish', 'test_get_overridden_aliases', 29)\": {'mod': [31, 32]}}}, {'path': 'thefuck/shells/fish.py', 'status': 'modified', 'Loc': {\"('Fish', '_get_overridden_aliases', 40)\": {'mod': [46]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/shells/fish.py"], "doc": [], "test": ["tests/shells/test_fish.py"], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "is_iss": 
0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/1120", "iss_label": "", "title": "Trying rule missing_space_before_subcommand taking so long", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.29 using Python 3.8.2 and ZSH 5.8\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n ubuntu 20.04 on wsl2\r\n\r\nHow to reproduce the bug:\r\n\r\n env THEFUCK_DEBUG=true thefuck test\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\n DEBUG: Trying rule: missing_space_before_subcommand; took: 0:00:08.341279\r\n No fucks given\r\n\r\nAnything else you think is relevant:\r\n\r\nI have no idea why this taking so long. anyone else having this problem?\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/KiaraGrouwstra/thefuck/commit/6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "file_loc": "{'base_commit': '6da0bc557f0fd94ea1397d3a7f508be896cc98d8', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 436)': {'add': [436]}, '(None, None, 468)': {'add': [468]}}}, {'path': 'tests/test_conf.py', 'status': 'modified', 'Loc': {\"('TestSettingsFromEnv', 'test_from_env', 48)\": {'add': [67], 'mod': [57]}}}, {'path': 'tests/test_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [96]}}}, {'path': 'thefuck/conf.py', 'status': 'modified', 'Loc': {\"('Settings', '_val_from_env', 91)\": {'mod': [104]}}}, {'path': 'thefuck/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [46, 61]}}}, {'path': 'thefuck/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [106]}, \"(None, 'get_all_executables', 112)\": {'add': [121]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/conf.py", "thefuck/utils.py", "thefuck/const.py"], "doc": ["README.md"], "test": ["tests/test_conf.py", "tests/test_utils.py"], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "a84671dd3b7505d4d73f11ee9c7d057429542e24", "is_iss": 0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/20", "iss_label": "", "title": "Some Unicode error in Ubuntu 14.10", "body": "``` bash\n$ apt-get update\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0444\u0430\u0439\u043b \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0438 /var/lib/apt/lists/lock - open (13: \u041e\u0442\u043a\u0430\u0437\u0430\u043d\u043e \u0432 \u0434\u043e\u0441\u0442\u0443\u043f\u0435)\nE: \u041d\u0435\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e \u0437\u0430\u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u043a\u0430\u0442\u0430\u043b\u043e\u0433 /var/lib/apt/lists/\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0444\u0430\u0439\u043b \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0438 /var/lib/dpkg/lock - open (13: \u041e\u0442\u043a\u0430\u0437\u0430\u043d\u043e \u0432 \u0434\u043e\u0441\u0442\u0443\u043f\u0435)\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u0432\u044b\u043f\u043e\u043b\u043d\u0438\u0442\u044c \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0443 
\u0443\u043f\u0440\u0430\u0432\u043b\u044f\u044e\u0449\u0435\u0433\u043e \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0430 (/var/lib/dpkg/); \u0443 \u0432\u0430\u0441 \u0435\u0441\u0442\u044c \u043f\u0440\u0430\u0432\u0430 \u0441\u0443\u043f\u0435\u0440\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044f?\n$ fuck\nTraceback (most recent call last):\n File \"/usr/local/bin/thefuck\", line 9, in \n load_entry_point('thefuck==1.7', 'console_scripts', 'thefuck')()\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/main.py\", line 91, in main\n matched_rule = get_matched_rule(command, rules, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/main.py\", line 67, in get_matched_rule\n if rule.match(command, settings):\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/utils.py\", line 41, in wrapper\n return fn(command, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/rules/no_command.py\", line 19, in match\n output = _get_output(command, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/rules/no_command.py\", line 13, in _get_output\n return result.stderr.read().decode()\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/a84671dd3b7505d4d73f11ee9c7d057429542e24", "file_loc": "{'base_commit': 'a84671dd3b7505d4d73f11ee9c7d057429542e24', 'files': [{'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'thefuck/rules/no_command.py', 'status': 'modified', 'Loc': {\"(None, '_get_output', 9)\": {'mod': [13]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/no_command.py", "setup.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "622298549172754afff07a8ea1f55358062e17a7", "is_iss": 0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/330", "iss_label": "", "title": "Add command options (--version, --help, --update/--upgrade)", "body": "And perhaps a manpage too, even if it only says \"Please use fuck --help for documentation\"\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/622298549172754afff07a8ea1f55358062e17a7", "file_loc": "{'base_commit': '622298549172754afff07a8ea1f55358062e17a7', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 110)': {'mod': [110]}, '(None, None, 112)': {'mod': [112]}}}, {'path': 'thefuck/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 3], 'mod': [83, 99, 100]}, \"(None, 'print_alias', 100)\": {'add': [101]}, \"(None, 'fix_command', 86)\": {'mod': [97]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/main.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "284d49da8d0ab3252b5426423b608033d39c2669", "is_iss": 0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/786", "iss_label": "next release", "title": "\"TypeError: 'module' object is not callable\" On any invocation of thefuck", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck 
--version` (something like `The Fuck 3.1 using Python 3.5.0`):\r\n\r\n The Fuck 3.25 using Python 3.6.4+\r\n\r\nYour shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.):\r\n\r\n GNU bash, version 4.4.18(1)-release (x86_64-pc-linux-gnu)\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n Ubuntu 18.04, Bionic Beaver\r\n\r\nHow to reproduce the bug:\r\n\r\n Execute any bad command (I tested with `cd..` and `apt install whatever`. Then enter `fuck`.\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\n```\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'instant_mode': False,\r\n 'no_colors': False,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/home/thomasokeeffe/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Received output: \r\nDEBUG: Call: export THEFUCK_DEBUG=true; with env: {'CLUTTER_IM_MODULE': 'xim', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_MENU_PREFIX': 'gnome-', 'LANG': 'C', 'GDM_LANG': 'en_US', 'MANAGERPID': '1425', 'DISPLAY': ':0', 'INVOCATION_ID': '09b52cf5b26f4acf8d4fcf48e96663bb', 'UNITY_DEFAULT_PROFILE': 'unity', 'COMPIZ_CONFIG_PROFILE': 'ubuntu', 'GTK2_MODULES': 'overlay-scrollbar', 'DOOMWADDIR': '/opt/doom', 'GTK_CSD': '0', 'COLORTERM': 'truecolor', 'TF_SHELL_ALIASES': 'alias alert=\\'notify-send --urgency=low -i \"$([ $? 
= 0 ] && echo terminal || echo error)\" \"$(history|tail -n1|sed -e \\'\\\\\\'\\'s/^\\\\s*[0-9]\\\\+\\\\s*//;s/[;&|]\\\\s*alert$//\\'\\\\\\'\\')\"\\'\\nalias dfhack=\\'~/df_linux/dfhack\\'\\nalias dwarff=\\'/home/thomasokeeffe/df_linux/df\\'\\nalias egrep=\\'egrep --color=auto\\'\\nalias fgrep=\\'fgrep --color=auto\\'\\nalias grep=\\'grep --color=auto\\'\\nalias l=\\'ls -CF\\'\\nalias la=\\'ls -A\\'\\nalias ll=\\'ls -alF\\'\\nalias ls=\\'ls --color=auto\\'\\nalias pip=\\'pip3\\'\\nalias python=\\'python3\\'', 'JAVA_HOME': '/usr/lib/jvm/java-8-oracle/', 'J2SDKDIR': '/usr/lib/jvm/java-9-oracle', 'PYTHONIOENCODING': 'utf-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'MANDATORY_PATH': '/usr/share/gconf/unity.mandatory.path', 'XDG_GREETER_DATA_DIR': '/var/lib/lightdm-data/thomasokeeffe', 'DERBY_HOME': '/usr/lib/jvm/java-9-oracle/db', 'USER': 'thomasokeeffe', 'DESKTOP_SESSION': 'unity', 'QT4_IM_MODULE': 'xim', 'TEXTDOMAINDIR': '/usr/share/locale/', 'DEFAULTS_PATH': '/usr/share/gconf/unity.default.path', 'PWD': '/home/thomasokeeffe', 'HOME': '/home/thomasokeeffe', 'JOURNAL_STREAM': '9:28556', 'TEXTDOMAIN': 'im-config', 'J2REDIR': '/usr/lib/jvm/java-9-oracle', 'QT_ACCESSIBILITY': '1', 'XDG_SESSION_TYPE': 'x11', 'COMPIZ_BIN_PATH': '/usr/bin/', 'XDG_DATA_DIRS': '/usr/share/unity:/usr/share/unity:/usr/local/share:/usr/share:/var/lib/snapd/desktop:/var/lib/snapd/desktop', 'XDG_SESSION_DESKTOP': 'unity', 'WINEDEBUG': '-all', 'SSH_AGENT_LAUNCHER': 'gnome-keyring', 'GTK_MODULES': 'gail:atk-bridge:unity-gtk-module', 'GNOME_SESSION_XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'TERM': 'xterm-256color', 'VTE_VERSION': '5002', 'SHELL': '/bin/bash', 'XDG_SEAT_PATH': '/org/freedesktop/DisplayManager/Seat0', 'QT_IM_MODULE': 'ibus', 'XMODIFIERS': '@im=ibus', 'IM_CONFIG_PHASE': '2', 'XDG_CURRENT_DESKTOP': 'Unity:Unity7:ubuntu', 'GPG_AGENT_INFO': '/home/thomasokeeffe/.gnupg/S.gpg-agent:0:1:', 'TF_ALIAS': 'fuck', 'UNITY_HAS_3D_SUPPORT': 'true', 'SHLVL': '2', 'LANGUAGE': 'en_US', 'WINDOWID': '67108870', 'GDMSESSION': 'unity', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'LOGNAME': 'thomasokeeffe', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XAUTHORITY': '/home/thomasokeeffe/.Xauthority', 'TF_HISTORY': '\\t python\\n\\t fuck\\n\\t source ~/.bashrc\\n\\t fuck\\n\\t apt install whatever\\n\\t fuck\\n\\t cd..\\n\\t fuck\\n\\t fuck --version\\n\\t export THEFUCK_DEBUG=true', 'XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-unity:/etc/xdg/xdg-unity:/etc/xdg', 'PATH': '/usr/bin/ski:/home/thomasokeeffe/.local/bin:/opt/doom:/usr/bin/python3:/usr/bin/ski:/home/thomasokeeffe/.local/bin:/opt/doom:/usr/bin/python3:/home/thomasokeeffe/.local/share/umake/bin:/home/thomasokeeffe/bin:/home/thomasokeeffe/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-9-oracle/bin:/usr/lib/jvm/java-9-oracle/db/bin', 'THEFUCK_DEBUG': 'true', 'LD_PRELOAD': 'libgtk3-nocsd.so.0', 'SESSION_MANAGER': 'local/Wirecat:@/tmp/.ICE-unix/1738,unix/Wirecat:/tmp/.ICE-unix/1738', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'GTK_IM_MODULE': 'ibus', '_': '/home/thomasokeeffe/.local/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: took: 0:00:00.001356\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000609\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.001838\r\nDEBUG: Total took: 0:00:00.028332\r\nTraceback (most recent call last):\r\n File 
\"/home/thomasokeeffe/.local/bin/thefuck\", line 11, in \r\n sys.exit(main())\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/entrypoints/main.py\", line 25, in main\r\n fix_command(known_args)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/entrypoints/fix_command.py\", line 41, in fix_command\r\n corrected_commands = get_corrected_commands(command)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 89, in get_corrected_commands\r\n corrected for rule in get_rules()\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 49, in get_rules\r\n key=lambda rule: rule.priority)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 17, in get_loaded_rules\r\n rule = Rule.from_path(path)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/types.py\", line 140, in from_path\r\n rule_module = load_source(name, str(path))\r\n File \"/usr/lib/python3.6/imp.py\", line 172, in load_source\r\n module = _load(spec)\r\n File \"\", line 696, in _load\r\n File \"\", line 677, in _load_unlocked\r\n File \"\", line 678, in exec_module\r\n File \"\", line 219, in _call_with_frames_removed\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/rules/apt_get.py\", line 8, in \r\n command_not_found = CommandNotFound()\r\nTypeError: 'module' object is not callable\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/fb39d0bbd349e916ae12a77f04efd151dd046e6b\n\nhttps://github.com/nvbn/thefuck/commit/284d49da8d0ab3252b5426423b608033d39c2669", "file_loc": "{'base_commit': '284d49da8d0ab3252b5426423b608033d39c2669', 'files': [{'path': 'tests/rules/test_apt_get.py', 'status': 'modified', 'Loc': {\"(None, 'test_match', 13)\": {'mod': [15, 16, 17]}, \"(None, 'test_not_match', 30)\": {'mod': [33, 34, 35]}, \"(None, 'test_get_new_command', 49)\": {'mod': [52, 53, 54]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": ["tests/rules/test_apt_get.py"], "config": [], "asset": []}} -{"organization": "home-assistant", "repo_name": "core", "base_commit": "2b5e7c26111e447c2714284151c2e7555abd11e4", "is_iss": 0, "iss_html_url": "https://github.com/home-assistant/core/issues/27175", "iss_label": "integration: google_assistant", "title": "Google assistant: something went wrong when using alarm", "body": "\r\n\r\n**Home Assistant release with the issue:**\r\n0.100.0b0\r\n\r\n\r\n\r\n\r\n**Last working Home Assistant release (if known):**\r\n\r\n\r\n**Operating environment (Hass.io/Docker/Windows/etc.):**\r\nhassio\r\n\r\n**Integration:**\r\n\r\nnabu casa cloud\r\ngoogle assistant\r\nenvisalink\r\n\r\n**Description of problem:**\r\nUsing the google assistant to arm home/arm away/disarm causes the google assistant to indicate that \"something went wrong\" although it actually performed the action.\r\nI am using the envisalink component which allows you to specify the code so that it is sent with each service call. I tried with/without the code configuration and it made no difference. 
\r\n\r\n\r\n**Problem-relevant `configuration.yaml` entries and (fill out even if it seems unimportant):**\r\n```yaml\r\n\r\n```\r\n\r\n**Traceback (if applicable):**\r\n```\r\n\r\n```\r\n\r\n**Additional information:**\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/36942", "commit_html_url": null, "file_loc": "{'base_commit': '2b5e7c26111e447c2714284151c2e7555abd11e4', 'files': [{'path': 'homeassistant/components/google_assistant/trait.py', 'status': 'modified', 'Loc': {\"('ArmDisArmTrait', None, 974)\": {'add': [990, 1000]}, \"('ArmDisArmTrait', 'sync_attributes', 1001)\": {'mod': [1005]}, \"('ArmDisArmTrait', 'execute', 1031)\": {'mod': [1034, 1038]}}}, {'path': 'tests/components/google_assistant/test_trait.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1033]}, \"(None, 'test_arm_disarm_arm_away', 865)\": {'mod': [876, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913]}, \"(None, 'test_arm_disarm_disarm', 1035)\": {'mod': [1046, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/google_assistant/trait.py"], "doc": [], "test": ["tests/components/google_assistant/test_trait.py"], "config": [], "asset": []}} -{"organization": "home-assistant", "repo_name": "core", "base_commit": "fb7fb0ea78ee335cd23f3647223a675718ccf048", "is_iss": 0, "iss_html_url": "https://github.com/home-assistant/core/issues/40316", "iss_label": "integration: knx", "title": "KNX problem with 0.115.0 and 0.115.1", "body": "## The problem\r\nKNX integration has changed behavior and don't work fine:\r\n1) it is possible to read the status of a scene only if it is launched from the KNX bus but not if it is launched from the HA\r\n2) KNX climate don't read operation_mode_state_address correctly, when the operation mode is changed it reads the correct state then it is changed to \"standby\"\r\n\r\n## Environment\r\nHome Assistant 0.115.1\r\nFrontend: 20200917.1 - latest\r\nRaspberry 3\r\narch | armv7l\r\nchassis | embedded\r\ndev | false\r\ndocker | true\r\ndocker_version | 19.03.11\r\nhassio | true\r\nhost_os | HassOS 4.13\r\ninstallation_type | Home Assistant OS\r\nos_name | Linux\r\nos_version | 4.19.127-v7\r\npython_version | 3.8.5\r\nsupervisor | 245\r\ntimezone | Europe/Rome\r\nversion | 0.115.1\r\nvirtualenv | false\r\n\r\n- Home Assistant Core release with the issue: 0.115.1\r\n- Last working Home Assistant Core release (if known): 0.113.3\r\n- Operating environment (OS/Container/Supervised/Core): 4.12 \r\n- Integration causing this issue: KNX\r\n- Link to integration documentation on our website: https://www.home-assistant.io/integrations/knx/\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/40472", "commit_html_url": null, "file_loc": "{'base_commit': 'fb7fb0ea78ee335cd23f3647223a675718ccf048', 'files': [{'path': 'homeassistant/components/knx/manifest.json', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, 2268)': {'mod': [2268]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc\nJson"}, "loctype": {"code": 
["homeassistant/components/knx/manifest.json"], "doc": [], "test": [], "config": ["requirements_all.txt"], "asset": []}} -{"organization": "home-assistant", "repo_name": "core", "base_commit": "faedba04079d2c999a479118b5189ef4c0bff060", "is_iss": 0, "iss_html_url": "https://github.com/home-assistant/core/issues/77928", "iss_label": "integration: velux\nstale", "title": "Somfy blind motors cannot be assigned to a room", "body": "### The problem\n\nSomfy motors will return `None` as serial number via the Velux KLF-200:\r\n[Handle devices without serial numbers.](https://github.com/Julius2342/pyvlx/pull/42/commits/d409d66db8732553e928f5dd9d00d458ba638dea)\r\n\r\nThis serial is usesd as unique id here:\r\n[core/homeassistant/components/velux/__init__.py#L114](https://github.com/home-assistant/core/blob/dev/homeassistant/components/velux/__init__.py#L114)\r\n\r\nCould it be reasonable to return the node name instead of `None`?\r\n```python\r\n if self.node.serial_number:\r\n return self.node.serial_number\r\n elif self.node.name:\r\n return self.node.name\r\n else:\r\n return \"velux_#\" + str(self.node.node_id)\r\n```\n\n### What version of Home Assistant Core has the issue?\n\n2022.8.7\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant OS\n\n### Integration causing the issue\n\nVelux\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/velux/\n\n### Diagnostics information\n\n_No response_\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\nRelated issues:\r\n[66262](https://github.com/home-assistant/core/issues/66262)\r\n[35935](https://github.com/home-assistant/core/issues/35935)\r\n[74009](https://github.com/home-assistant/core/issues/74009)\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/117508", "commit_html_url": null, "file_loc": "{'base_commit': 'faedba04079d2c999a479118b5189ef4c0bff060', 'files': [{'path': 'homeassistant/components/velux/__init__.py', 'status': 'modified', 'Loc': {\"('VeluxEntity', None, 106)\": {'mod': [111]}, \"('VeluxEntity', '__init__', 111)\": {'mod': [114]}}}, {'path': 'homeassistant/components/velux/cover.py', 'status': 'modified', 'Loc': {\"(None, 'async_setup_entry', 26)\": {'mod': [32]}, \"('VeluxCover', None, 38)\": {'mod': [44]}, \"('VeluxCover', '__init__', 44)\": {'mod': [46]}}}, {'path': 'homeassistant/components/velux/light.py', 'status': 'modified', 'Loc': {\"(None, 'async_setup_entry', 19)\": {'mod': [26]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/velux/light.py", "homeassistant/components/velux/cover.py", "homeassistant/components/velux/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "home-assistant", "repo_name": "core", "base_commit": "551a584ca69771804b6f094eceb67dcb25a2f627", "is_iss": 0, "iss_html_url": "https://github.com/home-assistant/core/issues/68620", "iss_label": "needs-more-information\nintegration: overkiz", "title": "Polling interval for stateless (e.g. Somfy (Oceania)) is not applied in Overkiz", "body": "### The problem\n\nEvery day I get a \"Gateway ID\" error in Overkiz error that reads as below. 
Same problem as [#66606](https://github.com/home-assistant/core/issues/66606) \r\n\r\n\"Translation Error: The intl string context variable \"gateway id\" was not provided to the string \"Gateway: {gateway id}\" Overkiz (by Somfy)\". \r\n\r\nWhen I click \"Reconfigure\" and reenter my password, the problem is corrected. But then it reoccurs in the next day or so.\r\n\r\nLooking at the log, it seems like there's some really aggressive polling going on? \r\n\r\n\n\n### What version of Home Assistant Core has the issue?\n\ncore-2022.3.5\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant Supervised\n\n### Integration causing the issue\n\nOverkiz (by Somfy)\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/overkiz\n\n### Diagnostics information\n\n[config_entry-overkiz-0bf20335f9aeaa86644cb071861f6ef1.json.txt](https://github.com/home-assistant/core/files/8341651/config_entry-overkiz-0bf20335f9aeaa86644cb071861f6ef1.json.txt)\r\n\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/133617", "commit_html_url": null, "file_loc": "{'base_commit': '551a584ca69771804b6f094eceb67dcb25a2f627', 'files': [{'path': 'homeassistant/components/overkiz/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [43]}, \"(None, 'async_setup_entry', 57)\": {'mod': [116, 117, 118, 119, 122]}}}, {'path': 'homeassistant/components/overkiz/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [46]}}}, {'path': 'homeassistant/components/overkiz/coordinator.py', 'status': 'modified', 'Loc': {\"('OverkizDataUpdateCoordinator', None, 36)\": {'add': [38]}, \"('OverkizDataUpdateCoordinator', '__init__', 39)\": {'add': [67], 'mod': [48, 62, 63, 64]}, \"('OverkizDataUpdateCoordinator', '_async_update_data', 69)\": {'add': [104], 'mod': [106]}, '(None, None, None)': {'add': [126], 'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/overkiz/const.py", "homeassistant/components/overkiz/__init__.py", "homeassistant/components/overkiz/coordinator.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "f7590d47641cedbf630b909aa8f53930c4a9ce5c", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/983", "iss_label": "site-bug", "title": "VRV - NoneType object is not iterable", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n\r\n- [X] I'm reporting a bug unrelated to a specific site\r\n- [X] I've verified that I'm running yt-dlp version **2021.09.02**\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] The provided URLs do not contain any DRM to the best of my knowledge\r\n- [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [X] I've searched the bugtracker for similar bug reports including closed ones\r\n- [X] I've read bugs section in FAQ\r\n\r\n\r\n## Verbose log\r\n\r\n\r\n\r\n```\r\nytdl -F -u PRIVATE -p PRIVATE \"https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend\" --verbose\r\n[debug] 
Command-line config: ['-F', '-u', 'PRIVATE', '-p', 'PRIVATE', 'https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend', '--verbose']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252\r\n[debug] yt-dlp version 2021.09.02 (exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-10-10.0.18363-SP0\r\n[debug] exe versions: ffmpeg 4.4-full_build-www.gyan.dev, ffprobe 4.4-full_build-www.gyan.dev\r\n[debug] Optional libraries: mutagen, pycryptodome, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[vrv] None: Downloading webpage\r\n[vrv] Downloading Token Credentials JSON metadata\r\n[debug] [vrv] Extracting URL: https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend\r\n[vrv] GRP5G39JR: Downloading resource path JSON metadata\r\n[vrv] GRP5G39JR: Downloading CMS Signing JSON metadata\r\n[vrv] GRP5G39JR: Downloading object JSON metadata\r\n[vrv] GRP5G39JR: Downloading video JSON metadata\r\n[vrv] GRP5G39JR: Downloading streams JSON metadata\r\n[vrv] GRP5G39JR: Downloading dash-audio-en-US information\r\n[vrv] GRP5G39JR: Downloading hls-audio-en-US information\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\nERROR: 'NoneType' object is not iterable\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1214, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1239, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", line 584, in extract\r\n File \"yt_dlp\\extractor\\vrv.py\", line 221, in _real_extract\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\n\r\n\r\n## Description\r\n\r\n\r\n\r\nI've noticed this is happening a little more often but it seems that the entire series for this one does this but then it works just fine on other series. So haven't really noticed where this is hanging up but i used `--write-pages` and got an extra dump file for this one vs. one that actually downloads, which looks like this.\r\n\r\n```\r\n\r\n \r\n \r\n \r\n VRV - Home of Your Favorite Channels\r\n \r\n
    [VRV homepage text dump truncated: signup and marketing copy ("What is Vrv?", "Create Account", "VRV Premium: Everything on VRV, Ad-free, $9.99 +tax/mo"), featured-series blurbs (Miss Kobayashi's Dragon Maid, TSUKIMICHI -Moonlit Fantasy-), and an "Ancient browser detected!" notice listing supported browsers (Chrome 55+, Firefox 60+, Edge 15+, Safari 10+); evidently the VRV home page rather than the requested episode page.]
    \r\n \r\n ```\r\n\r\nNot sure itf it is helpful but that's all I got for now.", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/f7590d47641cedbf630b909aa8f53930c4a9ce5c", "file_loc": "{'base_commit': 'f7590d47641cedbf630b909aa8f53930c4a9ce5c', 'files': [{'path': 'yt_dlp/extractor/vrv.py', 'status': 'modified', 'Loc': {\"('VRVIE', '_real_extract', 168)\": {'mod': [221]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/vrv.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3183", "iss_label": "geo-blocked\nsite-bug", "title": "Tele5 has an extraction error", "body": "### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Description\n\ntrying to download the curernt andromenda series:\r\n`yt-dlp -F https://tele5.de/mediathek/gene-roddenberrys-andromeda/`\r\n`[Tele5] gene-roddenberrys-andromeda: Downloading webpage`\r\n`ERROR: gene-roddenberrys-andromeda: An extractor error has occurred. (caused by KeyError('assetid')); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. Confirm you are on the latest ver`\r\n\r\n\n\n### Verbose log\n\n```shell\nERROR: gene-roddenberrys-andromeda: An extractor error has occurred. (caused by KeyError('assetid')); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/common.py\", line 617, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/tele5.py\", line 81, in _real_extract\r\n asset_id, country, realm = (player_info[x] for x in ('assetid', 'locale', 'realm', ))\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/tele5.py\", line 81, in \r\n asset_id, country, realm = (player_info[x] for x in ('assetid', 'locale', 'realm', ))\r\nKeyError: 'assetid'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "file_loc": "{'base_commit': '50e93e03a7ca6ae35a319ea310104f7d6d91eee3', 'files': [{'path': 'yt_dlp/YoutubeDL.py', 'status': 'modified', 'Loc': {}}, {'path': 'yt_dlp/extractor/aliexpress.py', 'status': 'modified', 'Loc': {\"('AliExpressLiveIE', None, 12)\": {'mod': [21]}}}, {'path': 'yt_dlp/extractor/applepodcasts.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 6]}, \"('ApplePodcastsIE', None, 15)\": {'add': [26], 'mod': [17, 22, 24, 25, 42, 43, 44, 45, 46]}, \"('ApplePodcastsIE', '_real_extract', 42)\": {'add': [52, 61], 'mod': [50, 56]}}}, {'path': 'yt_dlp/extractor/arte.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, \"('ArteTVPlaylistIE', '_real_extract', 230)\": {'add': [255]}}}, {'path': 'yt_dlp/extractor/audiomack.py', 'status': 'modified', 'Loc': {\"('AudiomackIE', None, 16)\": {'add': [31]}}}, {'path': 'yt_dlp/extractor/bbc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, \"('BBCIE', None, 604)\": {'add': [786, 791], 'mod': [796, 797, 799, 800, 801]}, \"('BBCCoUkIE', None, 39)\": {'mod': [41]}, \"('BBCCoUkIE', '_process_media_selector', 363)\": {'mod': [397, 398, 399]}, \"('BBCIE', '_real_extract', 906)\": {'mod': [1174, 1175, 1176]}, \"('BBCIE', 'parse_media', 1206)\": {'mod': [1217]}}}, {'path': 'yt_dlp/extractor/bigo.py', 'status': 'modified', 'Loc': {\"('BigoIE', '_real_extract', 30)\": {'add': [36], 'mod': [39, 47]}}}, {'path': 'yt_dlp/extractor/extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [70, 93, 308]}}}, {'path': 'yt_dlp/extractor/nuvid.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [8]}, \"('NuvidIE', None, 15)\": {'add': [22, 25, 30, 48], 'mod': [29]}, \"('NuvidIE', '_real_extract', 53)\": {'mod': [58, 59, 60, 61, 62, 63, 64, 70]}}}, {'path': 'yt_dlp/extractor/rutv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}, \"('RUTVIE', '_real_extract', 126)\": {'mod': [182]}}}, {'path': 'yt_dlp/extractor/streamcz.py', 'status': 'modified', 'Loc': {\"('StreamCZIE', None, 14)\": {'add': [24], 'mod': [34]}}}, {'path': 'yt_dlp/extractor/tele5.py', 'status': 'modified', 'Loc': {\"('Tele5IE', None, 12)\": {'add': [30], 'mod': [16, 45, 67, 68, 70, 71, 73, 74, 75, 76, 78, 80, 81, 82, 83, 84, 86, 87, 88, 90, 91, 92, 93, 94, 95, 97, 98, 99, 101, 102, 104, 105, 106, 107, 108]}, '(None, None, None)': {'mod': [4, 6, 7, 8, 10, 11, 12]}}}, {'path': 'yt_dlp/extractor/tv2dk.py', 'status': 'modified', 'Loc': {\"('TV2DKIE', '_real_extract', 79)\": {'add': [98], 'mod': [94]}, \"('TV2DKIE', None, 16)\": {'mod': [44, 45]}}}, {'path': 'yt_dlp/extractor/uol.py', 'status': 'modified', 'Loc': {\"('UOLIE', '_real_extract', 67)\": {'mod': [98]}}}, {'path': 'yt_dlp/extractor/urplay.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 7]}, \"('URPlayIE', None, 16)\": {'add': [28], 
'mod': [26, 53, 54, 55, 56, 57]}, \"('URPlayIE', '_real_extract', 54)\": {'add': [113], 'mod': [75, 76, 77, 78, 79, 101]}}}, {'path': 'yt_dlp/extractor/videa.py', 'status': 'modified', 'Loc': {\"('VideaIE', '_real_extract', 112)\": {'mod': [149, 166, 167, 168]}}}, {'path': 'yt_dlp/extractor/vimeo.py', 'status': 'modified', 'Loc': {\"('VimeoIE', None, 297)\": {'add': [638]}}}, {'path': 'yt_dlp/extractor/wdr.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12, 24]}, \"('WDRIE', None, 25)\": {'add': [43], 'mod': [31, 39]}, \"('WDRPageIE', None, 139)\": {'add': [209, 234], 'mod': [173, 175, 177, 186, 194, 197, 248]}, \"('WDRPageIE', '_real_extract', 258)\": {'add': [273], 'mod': [293, 295, 296, 299, 300, 301, 302]}, \"('WDRElefantIE', '_real_extract', 324)\": {'add': [336]}, \"('WDRIE', '_real_extract', 47)\": {'mod': [129, 130, 132]}}}, {'path': 'yt_dlp/extractor/zdf.py', 'status': 'modified', 'Loc': {\"('ZDFIE', None, 136)\": {'add': [138], 'mod': [198, 199, 200, 201, 202, 203, 204]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/extractors.py", "yt_dlp/extractor/streamcz.py", "yt_dlp/extractor/bbc.py", "yt_dlp/extractor/zdf.py", "yt_dlp/extractor/tv2dk.py", "yt_dlp/extractor/rutv.py", "yt_dlp/extractor/aliexpress.py", "yt_dlp/extractor/wdr.py", "yt_dlp/extractor/videa.py", "yt_dlp/extractor/nuvid.py", "yt_dlp/extractor/arte.py", "yt_dlp/extractor/vimeo.py", "yt_dlp/extractor/urplay.py", "yt_dlp/extractor/bigo.py", "yt_dlp/YoutubeDL.py", "yt_dlp/extractor/applepodcasts.py", "yt_dlp/extractor/tele5.py", "yt_dlp/extractor/uol.py", "yt_dlp/extractor/audiomack.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "80e8493ee7c3083f4e215794e4a67ba5265f24f7", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2885", "iss_label": "site-request\npatch-available", "title": "Add Filmarkivet.se as a Supported Site", "body": "### Checklist\n\n- [X] I'm reporting a new site support request\n- [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\n\n### Region\n\nUnited States\n\n### Example URLs\n\nhttps://www.filmarkivet.se/movies/paris-d-moll/\n\n### Description\n\nPlease add Filmarkivet.se as a supported site. I already watched the YouTube video \"The Secret Logos Of SF Studios (1919 - 1999)\" by CCGFilms, which has some SF Studios logos. 
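The Tele5 record above dies on a bare `KeyError('assetid')` while tuple-unpacking `player_info`. Below is a short, self-contained sketch of the defensive pattern of validating required keys before unpacking, so the failure names what is missing; this is illustrative only, not yt-dlp's actual fix, and the sample values are invented.

```python
# Illustrative sketch: validate required keys before tuple-unpacking,
# so a site-layout change produces a descriptive error instead of a
# bare KeyError deep inside a generator expression.
REQUIRED_KEYS = ('assetid', 'locale', 'realm')

def read_player_info(player_info):
    missing = [key for key in REQUIRED_KEYS if key not in player_info]
    if missing:
        raise KeyError(f'player info is missing {missing}; the page layout may have changed')
    return tuple(player_info[key] for key in REQUIRED_KEYS)

# Hypothetical page data mirroring the report: 'assetid' is absent.
try:
    asset_id, country, realm = read_player_info({'locale': 'de_DE', 'realm': 'tele5'})
except KeyError as err:
    print(err)
```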
I need to capture its logos.\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-v', 'https://www.filmarkivet.se/movies/paris-d-moll/']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8 (No ANSI), err utf-8 (No ANSI), pref cp1252\r\n[debug] yt-dlp version 2022.02.04 [c1653e9] (win_exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-7-6.1.7601-SP1\r\n[debug] exe versions: ffmpeg N-105662-ge534d98af3-20220217 (setts), ffprobe N-105038-g30322ebe3c-sherpya\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] [generic] Extracting URL: https://www.filmarkivet.se/movies/paris-d-moll/\r\n[generic] paris-d-moll: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] paris-d-moll: Downloading webpage\r\nWARNING: [generic] URL could be a direct video link, returning it as such.\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] paris-d-moll: Downloading 1 format(s): 0\r\n[debug] Invoking downloader on \"https://www.filmarkivet.se/movies/paris-d-moll/\"\r\n[download] Destination: paris-d-moll [paris-d-moll].unknown_video\r\n[download] 100% of 373.64KiB in 00:01\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/80e8493ee7c3083f4e215794e4a67ba5265f24f7", "file_loc": "{'base_commit': '80e8493ee7c3083f4e215794e4a67ba5265f24f7', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {\"('GenericIE', None, 143)\": {'add': [2529]}}}, {'path': 'yt_dlp/utils.py', 'status': 'modified', 'Loc': {\"(None, 'is_html', 3283)\": {'add': [3292], 'mod': [3294, 3295, 3296, 3297, 3298]}, '(None, None, None)': {'mod': [3300]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/utils.py", "yt_dlp/extractor/generic.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "5da08bde9e073987d1aae2683235721e4813f9c6", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5424", "iss_label": "site-enhancement", "title": "[VLIVE.TV] Extract release timestamp", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nIs there a way to change `upload_date` from UTC to a specific GMT? This video (https://www.vlive.tv/post/1-18318601) was posted on Nov. 
28, 2018 KST (Korean Standard Time) but yt-dlp downloads it as 20181127.\r\n\r\nI know you can prefer not to use UTC for YouTube videos but don't know how for other sites.\r\n\r\nHere is my command:\r\n`!yt-dlp -vU --embed-metadata --embed-thumbnail --merge-output-format \"mkv/mp4\" --write-subs --sub-langs all,-live_chat --embed-subs --compat-options no-keep-subs \"https://www.vlive.tv/post/1-18318601\" -o \"%(upload_date)s - %(creator)s - %(title)s.%(ext)s\" -P \"/content/drive/Shareddrives/VLIVE\" -P temp:\"/content/drive/Shareddrives/VLIVE/!temp\"`\r\n\r\nThanks!\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '--embed-metadata', '--embed-thumbnail', '--merge-output-format', 'mkv/mp4', '--write-subs', '--sub-langs', 'all,-live_chat', '--embed-subs', '--compat-options', 'no-keep-subs', 'https://www.vlive.tv/post/1-18318601', '-o', '%(upload_date)s - %(creator)s - %(title)s.%(ext)s', '-P', '/content/drive/Shareddrives/VLIVE', '-P', 'temp:/content/drive/Shareddrives/VLIVE/!temp']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8\r\n[debug] yt-dlp version 2022.10.04 [4e0511f27]\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Compatibility options: no-keep-subs\r\n[debug] Python 3.7.15 (CPython 64bit) - Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26)\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 3.4.11, ffprobe 3.4.11\r\n[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1706 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.10.04, Current version: 2022.10.04\r\nyt-dlp is up to date (2022.10.04)\r\n[debug] [vlive:post] Extracting URL: https://www.vlive.tv/post/1-18318601\r\n[vlive:post] 1-18318601: Downloading post JSON metadata\r\n[debug] [vlive] Extracting URL: http://www.vlive.tv/video/101216\r\n[vlive] 101216: Downloading officialVideoPost JSON metadata\r\n[vlive] 101216: Downloading inkey JSON metadata\r\n[vlive] 101216: Downloading JSON metadata\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[info] 101216: Downloading subtitles: en_US, es_PA, es_ES, fr_FR, in_ID, pt_PT, vi_VN, jp, zh_CN, ko_KR\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 101216: Downloading 1 format(s): avc1_720P\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_06_3/aa339b4a-f89e-11e8-bc80-3ca82a21f531-1544022097899_en_US_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.en_US.vtt\r\n[download] 100% of 55.30KiB in 00:00:00 at 154.65KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3b1a6fdd-f30e-11e8-8111-3ca82a220799-1543410308156_es_PA_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt\r\n[download] 100% of 24.65KiB in 00:00:00 at 317.51KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/2020_11_16/a3003ab1-27e4-11eb-9a2e-0050569c085d-1605514850566_es_ES_fan.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt\r\n[download] 100% of 59.17KiB in 00:00:00 at 197.86KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/2020_12_14/8d2ea1db-3dcf-11eb-9b2a-0050569c085d-1607924720110_fr_FR_fan.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt\r\n[download] 100% of 55.22KiB in 00:00:00 at 528.98KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3ace9972-f30e-11e8-8606-3ca82a22c1e9-1543410307659_in_ID_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt\r\n[download] 100% of 24.42KiB in 00:00:00 at 157.16KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3b090ad1-f30e-11e8-9c04-3ca82a225339-1543410308041_pt_PT_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.pt_PT.vtt\r\n[download] 100% of 25.00KiB in 00:00:00 at 267.88KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_07_4/dac66573-f9fc-11e8-98b0-3ca82a22d7a5-1544172503245_vi_VN_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt\r\n[download] 100% of 64.32KiB in 00:00:00 at 555.77KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3aee2f6b-f30e-11e8-bb16-3ca82a21e509-1543410307868_jp_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt\r\n[download] 100% of 23.29KiB in 00:00:00 at 258.58KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_10_4/077db2f1-fc58-11e8-8818-3ca82a22c1e9-1544431564794_zh_CN_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt\r\n[download] 100% of 52.85KiB in 00:00:00 at 581.89KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_06_2/cc2e3feb-f921-11e8-8285-3ca82a2243c9-1544078418977_ko_KR_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt\r\n[download] 100% of 64.29KiB in 00:00:00 at 450.71KiB/s\r\n[info] Downloading video thumbnail 1 ...\r\n[info] Writing video thumbnail 1 to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.png\r\n[download] /content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4 has already been downloaded\r\n[EmbedSubtitle] Embedding subtitles in \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! 
- \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt' -map 0 -dn -ignore_unknown -c copy -c:s mov_text -map -0:s -map 1:0 -metadata:s:s:0 language=eng -map 2:0 -metadata:s:s:1 language=spa -map 3:0 -metadata:s:s:2 language=spa -map 4:0 -metadata:s:s:3 language=fra -map 5:0 -metadata:s:s:4 language=ind -map 6:0 -metadata:s:s:5 language=por -map 7:0 -metadata:s:s:6 language=vie -map 8:0 -metadata:s:s:7 language=jp -map 9:0 -metadata:s:s:8 language=zho -map 10:0 -metadata:s:s:9 language=kor -movflags +faststart 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.temp.mp4'\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.fr_FR.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt (pass -k to keep)\r\n[Metadata] Adding metadata to \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=\u2665\ub3c4\uc694\uc77c\u2665 12/1 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601' -metadata date=20181127 -metadata purl=http://www.vlive.tv/video/101216 -metadata comment=http://www.vlive.tv/video/101216 -metadata 'artist=NCT\uc758 night night!' -movflags +faststart 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.temp.mp4'\r\n[EmbedThumbnail] mutagen: Adding thumbnail to \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.mp4\"\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/HHeroin/yt-dlp/commit/5da08bde9e073987d1aae2683235721e4813f9c6", "file_loc": "{'base_commit': '5da08bde9e073987d1aae2683235721e4813f9c6', 'files': [{'path': 'yt_dlp/extractor/vlive.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15]}, \"('VLiveIE', None, 69)\": {'add': [83, 100]}, \"('VLiveIE', '_real_extract', 148)\": {'add': [171]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/vlive.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "51c22ef4e2af966d6100d0d97d9e8019022df8ad", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2996", "iss_label": "bug", "title": "'<' not supported between instances of 'float' and 'str' and --throttled-rate error after update?", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nAfter update I get error\r\n\r\n'<' not supported between instances of 'float' and 'str'\r\n\r\nI found out that it is somewhat related to --throttled-rate setting? 
When I remove it I can download from YT no issues\r\nIf I leave it, I get the following message\r\n\r\n[download] 0.0% of 714.94MiB at 499.98KiB/s ETA 24:24ERROR: '<' not supported between instances of 'float' and 'str'\r\n\n\n### Verbose log\n\n```shell\nMicrosoft Windows [Version 6.1.7601]\r\n\r\n>yt-dlp https://www.youtube.com/watch?v=XUp9pe1T-UE --throttled-rate 999k\r\n[youtube] XUp9pe1T-UE: Downloading webpage\r\n[youtube] XUp9pe1T-UE: Downloading android player API JSON\r\n[info] XUp9pe1T-UE: Downloading 1 format(s): 571+251\r\nWARNING: Requested formats are incompatible for merge and will be merged into mkv\r\n[download] Destination: 8k VIDEOS _ Beauty of Nature 8K (60 FPS) HDR UltraHD _ Sony Demo [XUp9pe1T-UE].f571.mp4\r\n[download] 0.0% of 505.86MiB at 90.90KiB/s ETA 01:34:58ERROR: '<' not supported between instances of 'float' and 'str'\r\n\r\nyt>\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/51c22ef4e2af966d6100d0d97d9e8019022df8ad", "file_loc": "{'base_commit': '51c22ef4e2af966d6100d0d97d9e8019022df8ad', 'files': [{'path': 'yt_dlp/__init__.py', 'status': 'modified', 'Loc': {\"(None, 'validate_options', 156)\": {'mod': [258]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "6f638d325e1878df304822c6bf4e231e06dae89a", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3467", "iss_label": "docs/meta/cleanup\nhigh-priority\nregression", "title": "Error since commit 43cc91a", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nAfter commit 43cc91a, I get the error shown in the verbose log.\n\n### Verbose log\n\n```shell\nyt-dlp -Uv\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/bin/yt-dlp/__main__.py\", line 13, in \r\n File \"\", line 259, in load_module\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/__init__.py\", line 12, in \r\nModuleNotFoundError: No module named 'yt_dlp.compat'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/6f638d325e1878df304822c6bf4e231e06dae89a", "file_loc": "{'base_commit': '6f638d325e1878df304822c6bf4e231e06dae89a', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, 61)': {'add': [61]}, '(None, None, 64)': {'mod': [64]}, '(None, None, 68)': {'mod': [68]}, '(None, None, 70)': {'mod': [70]}}}, {'path': 'yt_dlp/extractor/anvato.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7], 'mod': [22, 23, 24, 25, 26, 27, 28, 29, 30]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/anvato.py"], "doc": [], "test": [], "config": ["Makefile"], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "14a086058a30a0748b5b716e9b21481f993518f3", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1601", "iss_label": "site-bug", "title": "ARD:mediathek doesn't work anymore", "body": "### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2021.10.22**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Description\n\nDownloading from ARDmediathek doesn\u2019t work anymore\n\n### Verbose log\n\n```shell\n$ /repositories/yt-dlp/yt-dlp --no-config --verbose https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/\r\n[debug] Command-line config: ['--no-config', '--verbose', 'https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.10.22 (zip)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Python version 3.9.7 (CPython 64bit) - Linux-5.13.0-21-generic-x86_64-with-glibc2.34\r\n[debug] exe versions: ffmpeg 4.4 (setts), ffprobe 4.4, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP 53.36.205.78 (DE) as X-Forwarded-For\r\n[debug] [ARD:mediathek] Extracting URL: https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/\r\n[ARD:mediathek] Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll: Downloading webpage\r\n[ARD:mediathek] 10049223: Downloading media JSON\r\nERROR: [ARD:mediathek] Unable to download JSON metadata: HTTP Error 404: Not Found (caused by ); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. 
Be sure to call yt-dlp with the --verbose flag and include its complete output.\r\n File \"/repositories/yt-dlp/yt-dlp/yt_dlp/extractor/common.py\", line 713, in _request_webpage\r\n return self._downloader.urlopen(url_or_request)\r\n File \"/repositories/yt-dlp/yt-dlp/yt_dlp/YoutubeDL.py\", line 3288, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 555, in error\r\n result = self._call_chain(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 747, in http_error_302\r\n return self.parent.open(new, timeout=req.timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 561, in error\r\n return self._call_chain(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 641, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/14a086058a30a0748b5b716e9b21481f993518f3", "file_loc": "{'base_commit': '14a086058a30a0748b5b716e9b21481f993518f3', 'files': [{'path': 'yt_dlp/extractor/ard.py', 'status': 'modified', 'Loc': {\"('ARDBetaMediathekIE', None, 390)\": {'add': [405, 428], 'mod': [391]}, \"('ARDBetaMediathekIE', '_ARD_extract_playlist', 512)\": {'mod': [528, 529, 530, 531, 532, 533, 534, 536, 537, 538, 539, 540, 541]}, \"('ARDBetaMediathekIE', '_real_extract', 551)\": {'mod': [577]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/ard.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "is_iss": 0, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/2019", "iss_label": "", "title": "[Bug] text of collapsed node still present ", "body": "On latest commit https://github.com/comfyanonymous/ComfyUI/commit/d66b631d74e6f6ac95c61c63d4a0da150bf74903.\r\nDragging the node also doesn't do anything until it's uncollapsed.\r\n\"Screenshot\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/comfyanonymous/ComfyUI/commit/ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "file_loc": "{'base_commit': 'ab7d4f784892c275e888d71aa80a3a2ed59d9b83', 'files': [{'path': 'web/scripts/domWidget.js', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [235, 292]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["web/scripts/domWidget.js"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": 
"3e589bf1356024fb471a9d17738e4626f21a953b", "is_iss": 0, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1153", "iss_label": "bug\ntriage", "title": "Azure Deployment Name Bug", "body": "## Policy and info\r\n - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n - Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n## Expected Behavior\r\n\r\nThere shouldn't be an error with the model name.\r\n\r\n## Current Behavior\r\n\r\n### Deployment name seems to mix with model name.\r\n\r\nEverything seems to work perfectly and code is being made:\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/9fd4fcf5-d78e-4179-9406-a98867a9dfc1)\r\n\r\nBut then an error pops up telling me that the model doesn't exist and it takes my Azure OpenAI deployment name and says it's not a model.\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/de5d275e-aa79-4d55-899e-ecf87d7a4261)\r\n\r\nHere is the command style I used following these instructions from here: https://gpt-engineer.readthedocs.io/en/latest/open_models.html\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/987113ca-0616-4a38-9f35-ccec2cebda5d)\r\n\r\n`gpt-engineer --azure [redacted_endpoint_url] ./snake_game/ [redacted_deployment_name]`\r\n\r\n\r\n## Additional Failure Information\r\n\r\nUsing Azure OpenAI with gpt-4-turbo deployed with a different deployment name. Only installed gpt-engineer in a virtual environment.", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1170", "commit_html_url": null, "file_loc": "{'base_commit': '3e589bf1356024fb471a9d17738e4626f21a953b', 'files': [{'path': '.github/CONTRIBUTING.md', 'status': 'modified', 'Loc': {'(None, None, 114)': {'add': [114]}}}, {'path': 'gpt_engineer/core/ai.py', 'status': 'modified', 'Loc': {\"('AI', '_create_chat_model', 330)\": {'mod': [349]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/core/ai.py"], "doc": [".github/CONTRIBUTING.md"], "test": [], "config": [], "asset": []}} -{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "c4c1203fc07b2e23c3e5a5e9277266a711ab9466", "is_iss": 0, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/35", "iss_label": "", "title": ".py files are not being created. I just get all_output.txt that I manually have to create from.", "body": "Hi, I absolutely love this script. This is the most accurate auto-GPT development script I have tried yet, it's so powerful!\r\n\r\nIn the demo video it shows the script creating each of the development files, in my case .py files within the workspace folder automatically. My build isn't doing this I just get an all_output.txt file with all .py files codes in one place and a single python file.\r\n\r\nHow do I ensure that GPT-Engineer automatically creates the .py files for me. 
Thanks", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/120", "commit_html_url": null, "file_loc": "{'base_commit': 'c4c1203fc07b2e23c3e5a5e9277266a711ab9466', 'files': [{'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, \"(None, 'parse_chat', 6)\": {'add': [11], 'mod': [6, 7, 8, 10, 13, 14, 15, 16, 17, 18, 19]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/chat_to_files.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b", "is_iss": 0, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1128", "iss_label": "bug\ntriage", "title": "Applying diffs failing silently ", "body": "## Expected Behavior\r\n\r\nI would expect GPT engineer to either successfully apply all diffs sent by the AI or fail in a way that lets you know which diffs have been applied, which failed, and allows you to manually salvage the failed diff parts by copy and pasting \r\n\r\n## Current Behavior\r\n\r\nThe current behaviour seems to be that it applies the sections of the diff which it can and silently throws the rest of the code away. From a user's perspective it seems like everything has gone well - but in reality it's only applied a portion of the diff. \r\n\r\nThis is really bad from a usability perspective - for one, a partially applied diff is obviously never going to be working code, so applying it is pointless. Also, the knowledge that this is the behaviour of gpte means I need to manually check every single output to verify it's applied the whole diff, which is a complete waste of time for diffs which do apply successfully. \r\n\r\nNot applying any of the diffs at all would actually be a better outcome for me, as at least I would have a consistent workflow of copy and pasting... 
however, a more sensible solution is applying the diffs it can, and if it can't apply a diff for a file, not applying any change to it at all, and instead providing an error output which is convenient for the user to copy and paste manually into the file \r\n\r\n### Failure Logs\r\nI can't upload failure logs as the code I'm working on is sensitive", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1138", "commit_html_url": null, "file_loc": "{'base_commit': '7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b', 'files': [{'path': 'gpt_engineer/core/diff.py', 'status': 'modified', 'Loc': {\"('Diff', 'validate_and_correct', 340)\": {'mod': [357]}}}, {'path': 'tests/core/test_salvage_correct_hunks.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/core/diff.py"], "doc": [], "test": ["tests/core/test_salvage_correct_hunks.py"], "config": [], "asset": []}} -{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1527", "iss_label": "", "title": "Add copy to clipboard in plaintext for image details", "body": "Add copy to clipboard in plaintext for image details\r\n\r\nA button we can click to copy to clipboard all of the image details shown in the log output file. If not on the log page, then on the app itself.\r\n\r\nThe quick copying of these settings enables us to share our work methods with others in the community more smoothly, thereby assisting them in a more efficient and effective way.\r\n\r\n![chrome_ybBu4Zoryf](https://github.com/lllyasviel/Fooocus/assets/57927413/a1ad7fa5-5a99-43e5-8420-e2c4aeb055de)\r\n\r\n![chrome_6zslF9Z3UD](https://github.com/lllyasviel/Fooocus/assets/57927413/ded36d98-377a-4130-b20f-01defbee1e6b)\r\n\r\nWhen I copy the text manually from the log file, it looks like a garbled mess. See example below.\r\n\r\n```\r\nPrompt | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. full body\r\n-- | --\r\nNegative Prompt | \u00a0\r\nFooocus V2 Expansion | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. 
full body, intricate, elegant, highly detailed, sharp focus, illuminated, sunny, magical, scenic, artistic, true colors, deep aesthetic, very inspirational, cute, cozy, inspired, original, fine detail, professional, winning, enhanced, polished\r\nStyles | ['SAI Photographic', 'Fooocus V2', 'Artstyle Hyperrealism', 'MRE Artistic Vision']\r\nPerformance | Quality\r\nResolution | (1024, 1024)\r\nSharpness | 3\r\nGuidance Scale | 1.7\r\nADM Guidance | (1.5, 0.8, 0.3)\r\nBase Model | dreamshaperXL_turboDpmppSDEKarras.safetensors\r\nRefiner Model | None\r\nRefiner Switch | 0.5\r\nSampler | dpmpp_sde\r\nScheduler | karras\r\nSeed | 5044578018584347060\r\nVersion | v2.1.853\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/lllyasviel/Fooocus/commit/f7bb578a1409b1f96aff534ff5ed2bd10502296f", "file_loc": "{'base_commit': 'f7bb578a1409b1f96aff534ff5ed2bd10502296f', 'files': [{'path': 'fooocus_version.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'modules/async_worker.py', 'status': 'modified', 'Loc': {\"(None, 'handler', 116)\": {'mod': [400, 401, 780, 782]}}}, {'path': 'modules/private_logger.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, \"(None, 'log', 21)\": {'add': [38, 61], 'mod': [42, 60]}}}, {'path': 'update_log.md', 'status': 'modified', 'Loc': {'(None, None, 1)': {'add': [0]}}}, {'path': 'webui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 14, 111, 512], 'mod': [103]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/private_logger.py", "webui.py", "modules/async_worker.py", "fooocus_version.py"], "doc": ["update_log.md"], "test": [], "config": [], "asset": []}} -{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/2561", "iss_label": "enhancement", "title": "[Feature Request]: Prompt embedded LoRAs", "body": "### Is there an existing issue for this?\r\n\r\n- [x] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What would your feature do?\r\n\r\nSimilar to how A1111 handles LoRAs by default, I believe there should be an option to embed LoRAs in the prompt by using the following structure:\r\n```csharp\r\n\r\n```\r\n\r\nThe current workflow works well, but has a few limitations, namely being able to use wildcards and LoRAs together for more dynamic prompts. Additionally, this feature already exists for embeddings, so I reckon adding it for LoRAs should be trivial.\r\n\r\n### Proposed workflow\r\n\r\n1. Enter LoRAs in the prompt using the `` structure\r\n2. 
Generate images, and LoRAs are loaded for each iteration\r\n\r\n### Additional information\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/2323", "commit_html_url": null, "file_loc": "{'base_commit': '3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f', 'files': [{'path': 'modules/async_worker.py', 'status': 'modified', 'Loc': {\"(None, 'handler', 134)\": {'add': [435], 'mod': [155, 453, 454, 655, 865, 908, 912]}, \"(None, 'worker', 19)\": {'mod': [47, 50, 51, 72]}, \"(None, 'callback', 806)\": {'mod': [810]}}}, {'path': 'modules/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23], 'mod': [11]}}}, {'path': 'modules/sdxl_styles.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5, 7, 12]}, \"(None, 'apply_wildcards', 68)\": {'mod': [68, 69, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 95]}, \"(None, 'get_words', 95)\": {'mod': [104]}}}, {'path': 'modules/util.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 16], 'mod': [1]}, \"(None, 'get_files_from_folder', 166)\": {'mod': [166, 167, 168, 170, 172, 173, 174, 175, 176, 177, 178, 179, 180, 182]}, \"('PromptStyle', None, 358)\": {'mod': [358]}, \"(None, 'get_enabled_loras', 396)\": {'mod': [397]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/async_worker.py", "modules/sdxl_styles.py", "modules/config.py", "modules/util.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1671", "iss_label": "bug (AMD)", "title": "Cannot use image prompts", "body": "I am trying to use 2x images as an image prompt but when I press generate this is what I'm getting (I can generate just fine without image prompts):\r\n\r\nFull console log:\r\n\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 3\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 1.5\r\n[Parameters] Seed = 953753918774495193\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 6 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\nmodel_type EPS\r\nUNet ADM Dimension 2816\r\nUsing split attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing split attention in VAE\r\nextra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}\r\nBase model loaded: H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['None', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors].\r\nRequested to load SDXLClipModel\r\nLoading 1 new model\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Image processing ...\r\nTraceback (most recent call last):\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 806, in worker\r\n 
handler(task)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 647, in handler\r\n task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\ip_adapter.py\", line 185, in preprocess\r\n cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 117, in forward\r\n latents = attn(x, latents) + latents\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 55, in forward\r\n latents = self.norm2(latents)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\normalization.py\", line 190, in forward\r\n return F.layer_norm(\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\functional.py\", line 2515, in layer_norm\r\n return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!\r\nTotal time: 37.40 seconds\r\n\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1678", "commit_html_url": null, "file_loc": "{'base_commit': '8e62a72a63b30a3067d1a1bc3f8d226824bd9283', 'files': [{'path': 'extras/ip_adapter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10], 'mod': [5]}, \"(None, 'load_ip_adapter', 90)\": {'mod': [119, 120, 121, 122, 123, 124, 125, 126]}}}, {'path': 'fooocus_version.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fooocus_version.py", "extras/ip_adapter.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1063", "iss_label": "", "title": "Faceswap crashes ", "body": "**Describe 
the problem**\r\nThe program crashes when trying to use an image as prompt and selecting the faceswap advanced option\r\n\r\n**Full Console Log**\r\nRequirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2)\r\nRequirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)\r\nRequirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)\r\n/content\r\nfatal: destination path 'Fooocus' already exists and is not an empty directory.\r\n/content/Fooocus\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py', '--preset', 'realistic', '--share']\r\nLoaded preset: /content/Fooocus/presets/realistic.json\r\nPython 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]\r\nFooocus version: 2.1.824\r\nRunning on local URL: http://127.0.0.1:7865/\r\nRunning on public URL: https://fb6371be5d9ced0c1d.gradio.live/\r\n\r\nThis share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\r\nTotal VRAM 15102 MB, total RAM 12983 MB\r\n2023-11-29 21:03:50.202601: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-11-29 21:03:50.202658: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-11-29 21:03:50.202708: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-11-29 21:03:52.244376: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nSet vram state to: NORMAL_VRAM\r\nDisabling smart memory management\r\nDevice: cuda:0 Tesla T4 : native\r\nVAE dtype: torch.float32\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nmodel_type EPS\r\nadm 2816\r\nUsing pytorch attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing pytorch attention in VAE\r\nextra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}\r\nBase model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.\r\nFooocus V2 Expansion: Vocab with 642 words.\r\nFooocus Expansion engine loaded for cuda:0, use_fp16 = True.\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.30 seconds\r\nApp started successful. 
Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://fb6371be5d9ced0c1d.gradio.live/\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 604471590939558783\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 60 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Preparing Fooocus text #1 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full light, gorgeous, amazing, elegant, intricate, highly detailed, dynamic, rich deep vivid colors, beautiful, very inspirational, inspiring, thought, fancy, sharp focus, colorful, epic, professional, artistic, new, charismatic, cool, brilliant, awesome, attractive, shiny, fine detail, pretty, focused, creative\r\n[Fooocus] Preparing Fooocus text #2 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full pretty, attractive, fine detail, intricate, elegant, luxury, elite, dramatic light, highly detailed, cinematic, complex, sharp focus, illuminated, amazing, marvelous, thought, epic, fabulous, colorful, shiny, brilliant, symmetry, great, excellent composition, ambient, dynamic, vibrant colors, relaxed, beautiful\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus Model Management] Moving model(s) has taken 0.11 seconds\r\n[Fooocus] Encoding positive #2 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Encoding negative #2 ...\r\n[Parameters] Denoising Strength = 1.0\r\n[Parameters] Initial Latent shape: Image Space (1152, 896)\r\nPreparation time: 3.60 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.40 seconds\r\n100% 60/60 [00:55<00:00, 1.09it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 60.73 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.01 seconds\r\n100% 60/60 [00:56<00:00, 1.06it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 61.85 seconds\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.57 seconds\r\nTotal time: 131.21 seconds\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 7513856776859948774\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\nextra keys clip vision: ['vision_model.embeddings.position_ids']\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1710", "commit_html_url": null, "file_loc": "{'base_commit': 'd57afc88a48359bc1642c2ae30a091f0426eff43', 'files': [{'path': 'fooocus_colab.ipynb', 'status': 'modified', 'Loc': {'(None, None, 15)': {'mod': [15]}}}, {'path': 'readme.md', 'status': 'modified', 'Loc': {'(None, None, 127)': {'add': [127]}, '(None, None, 118)': {'mod': [118]}, '(None, None, 124)': {'mod': [124]}}}, {'path': 'ldm_patched/modules/args_parser.py', 'Loc': {'(None, None, None)': [99]}, 
'base_commit': 'cca0ca704a713ab153938e78de6787609c723cad'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fooocus_colab.ipynb", "ldm_patched/modules/args_parser.py"], "doc": ["readme.md"], "test": [], "config": [], "asset": []}} -{"organization": "odoo", "repo_name": "odoo", "base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "is_iss": 0, "iss_html_url": "https://github.com/odoo/odoo/issues/7306", "iss_label": "", "title": "[v8.0] Bank statement : Customer Import invoice wizard do not auto-fill the right field", "body": "Step to reproduce:\n\ncreate a customer invoice\ncreate a new bank statement and import this invoice\nclick on 'Reconcile'\nProblem: No match proposition between the bank statement line and the invoice move line can be found since the communication field is '/'. (The invoice number is in the field 'Reference' instead)\n\nSo please the ref must go to communication\n\nThanks\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/odoo/odoo/commit/72ec0050b442214c9be93907fc01a48832243c15", "file_loc": "{'base_commit': '72ec0050b442214c9be93907fc01a48832243c15', 'files': [{'path': 'addons/account/account_bank_statement.py', 'status': 'modified', 'Loc': {\"('account_bank_statement_line', 'get_reconciliation_proposition', 537)\": {'mod': [575]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["addons/account/account_bank_statement.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "197287fc303119bf71caf9b3f72280cab08da749", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1147", "iss_label": "", "title": "[Bug]: \u7ffb\u8bd1arxiv\u6587\u6863\u62a5\u9519\uff0c\u65e0\u8bba\u672c\u5730\u81ea\u5df1\u642d\u5efa\u8fd8\u662f\u5b98\u65b9\u5728\u7ebf\u5747\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5b98\u65b9\u5728\u7ebf\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \"./toolbox.py\", line 165, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 249, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 141, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \"./toolbox.py\", line 507, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"/usr/lib/python3.8/tarfile.py\", line 1608, in open\r\n> raise ReadError(\"file could not be opened successfully\")\r\n> tarfile.ReadError: file could not be opened successfully\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e 
socks5h://localhost:7890, \u4ee3\u7406\u6240\u5728\u5730\uff1aJapan\r\n\r\n\u672c\u5730\u642d\u5efa\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> [Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \".\\toolbox.py\", line 150, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 250, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 139, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \".\\toolbox.py\", line 461, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"D:\\academic-gpt\\installer_files\\env\\lib\\tarfile.py\", line 1811, in open\r\n> raise ReadError(f\"file could not be opened successfully:\\n{error_msgs_summary}\")\r\n> tarfile.ReadError: file could not be opened successfully:\r\n> - method gz: ReadError('invalid header')\r\n> - method bz2: ReadError('not a bzip2 file')\r\n> - method xz: ReadError('not an lzma file')\r\n> - method tar: ReadError('invalid header')\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://127.0.0.1:12341, \u4ee3\u7406\u6240\u5728\u5730\uff1aHong Kong - Cloudflare, Inc.\r\n\r\n\u6240\u7ffb\u8bd1\u7684arxiv\u6587\u6863\u7684\u5730\u5740\u4e3a\uff1ahttps://arxiv.org/abs/2112.10551\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![msedge_HmO7O9M6OT](https://github.com/binary-husky/gpt_academic/assets/10786234/51e6ff95-9b95-47cd-b671-322aa1808389)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/197287fc303119bf71caf9b3f72280cab08da749", "file_loc": "{'base_commit': '197287fc303119bf71caf9b3f72280cab08da749', 'files': [{'path': 'shared_utils/handle_upload.py', 'status': 'modified', 'Loc': {\"(None, 'extract_archive', 91)\": {'mod': [107, 108, 109, 110, 111, 112, 113, 114, 116, 117]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["shared_utils/handle_upload.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/558", "iss_label": "", "title": "\u80fd\u5426\u5229\u7528EdgeGPT\uff0c\u652f\u6301\u8c03\u7528\u5fae\u8f6fBing\u63a5\u53e3", "body": "\u5927\u4f6c\u4eec\u6c42\u6c42\u4e86\uff0c\u770b\u770b\u8fd9\u4e2a\u9879\u76ee\u5427\uff0chttps://github.com/acheong08/EdgeGPT\r\n\u5982\u679c\u53ef\u4ee5\u65b9\u4fbf\u5730\u8c03\u7528Bing\u63a5\u53e3\uff0c\u6216\u8005\u672a\u6765\u7684\u767e\u5ea6\u3001\u963f\u91cc\u7b49\u7b2c\u4e09\u65b9\u63a5\u53e3\uff0c\u5bf9\u4e8e\u6ca1\u6709openAI-key\u4e5f\u6ca1\u6cd5\u672c\u5730\u90e8\u7f72GLM\u7684\u540c\u5b66\u662f\u798f\u97f3\u554a", "code": null, "pr_html_url": 
null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/65317e33af87640b68c84c9f6ee67188b76c6d7a", "file_loc": "{'base_commit': '65317e33af87640b68c84c9f6ee67188b76c6d7a', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [65], 'mod': [47, 48]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21, 119]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e359fff0405c4cb865b809b4ecfc0a95a54d2512", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1554", "iss_label": "", "title": "[Bug]: docker\u5b89\u88c5\u7248\u672c\u9002\u914dspark api\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nDocker-Compose\uff08Windows/Mac\uff09\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5728mac\u672c\u5730\u4f7f\u7528conda\u5b89\u88c5\u65b9\u5f0f\uff0c\u9002\u914dspark api\u53ef\u4ee5\u6b63\u5e38\u8fd0\u884c\u3002\u4f46\u662f\u901a\u8fc7docker compose\u65b9\u5f0f\u5b89\u88c5\u4e4b\u540e\u901a\u8fc7spark api\u4f1a\u51fa\u73b0\u62a5\u9519\uff0c\u4e0d\u8fc7\u5343\u5e06api\u5219\u53ef\u4ee5\u6b63\u5e38\u4f7f\u7528\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\r\n\"Snipaste_2024-02-14_21-12-27\"\r\n\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\ngpt_academic_nolocalllms-1 | error: Connection to remote host was lost.\r\ngpt_academic_nolocalllms-1 | Exception ignored in thread started by: .run at 0x2aaaf7fdfa60>\r\ngpt_academic_nolocalllms-1 | Traceback (most recent call last):\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/com_sparkapi.py\", line 113, in run\r\ngpt_academic_nolocalllms-1 | ws.send(data)\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/websocket/_app.py\", line 284, in send\r\ngpt_academic_nolocalllms-1 | raise WebSocketConnectionClosedException(\"Connection is already closed.\")\r\ngpt_academic_nolocalllms-1 | websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed.\r\ngpt_academic_nolocalllms-1 | Traceback (most recent call last):\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/routes.py\", line 422, in run_predict\r\ngpt_academic_nolocalllms-1 | output = await app.get_blocks().process_api(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/blocks.py\", line 1323, in process_api\r\ngpt_academic_nolocalllms-1 | result = await self.call_function(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/blocks.py\", line 1067, in call_function\r\ngpt_academic_nolocalllms-1 | prediction = await utils.async_iteration(iterator)\r\ngpt_academic_nolocalllms-1 | 
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 336, in async_iteration\r\ngpt_academic_nolocalllms-1 | return await iterator.__anext__()\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 329, in __anext__\r\ngpt_academic_nolocalllms-1 | return await anyio.to_thread.run_sync(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/to_thread.py\", line 56, in run_sync\r\ngpt_academic_nolocalllms-1 | return await get_async_backend().run_sync_in_worker_thread(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 2134, in run_sync_in_worker_thread\r\ngpt_academic_nolocalllms-1 | return await future\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 851, in run\r\ngpt_academic_nolocalllms-1 | result = context.run(func, *args)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 312, in run_sync_iterator_async\r\ngpt_academic_nolocalllms-1 | return next(iterator)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/gpt/toolbox.py\", line 115, in decorated\r\ngpt_academic_nolocalllms-1 | yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/bridge_all.py\", line 765, in predict\r\ngpt_academic_nolocalllms-1 | yield from method(inputs, llm_kwargs, *args, **kwargs)\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/bridge_spark.py\", line 60, in predict\r\ngpt_academic_nolocalllms-1 | if response == f\"[Local Message] \u7b49\u5f85{model_name}\u54cd\u5e94\u4e2d ...\":\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^\r\ngpt_academic_nolocalllms-1 | UnboundLocalError: cannot access local variable 'response' where it is not associated with a value\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/e359fff0405c4cb865b809b4ecfc0a95a54d2512", "file_loc": "{'base_commit': 'e359fff0405c4cb865b809b4ecfc0a95a54d2512', 'files': [{'path': 'request_llms/bridge_qianfan.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 135)\": {'add': [148, 151], 'mod': [161, 162, 163, 164, 165, 166]}}}, {'path': 'request_llms/bridge_qwen.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 25)\": {'add': [53]}}}, {'path': 'request_llms/bridge_skylark2.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 32)\": {'add': [58]}}}, {'path': 'request_llms/bridge_spark.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 36)\": {'add': [54]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_qwen.py", "request_llms/bridge_qianfan.py", "request_llms/bridge_skylark2.py", "request_llms/bridge_spark.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", 
"base_commit": "c17fc2a9b55b1c7447718a06a3eac4378828bb22", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1021", "iss_label": "waiting feedback", "title": "[Feature]: \u901a\u4e49\u5343\u95ee\u7684\u6a21\u578b\u5f00\u6e90\u4e86,\u5efa\u8bae\u52a0\u5165.", "body": "### Class | \u7c7b\u578b\n\nNone\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\n\u9644\uff1a\u5f00\u6e90\u5730\u5740\r\n\r\n\u9b54\u642dModelScope\uff1a\r\n\r\nhttps://modelscope.cn/models/qwen/Qwen-7B/summary\r\n\r\nhttps://modelscope.cn/models/qwen/Qwen-7B-Chat/summary\r\n\r\nHugging Face\uff1ahttps://huggingface.co/Qwen\r\n\r\nGitHub\uff1ahttps://github.com/QwenLM/Qwen-7B", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/c17fc2a9b55b1c7447718a06a3eac4378828bb22", "file_loc": "{'base_commit': 'c17fc2a9b55b1c7447718a06a3eac4378828bb22', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [337]}}}, {'path': 'request_llm/bridge_qwen.py', 'status': 'modified', 'Loc': {\"('GetONNXGLMHandle', 'load_model_and_tokenizer', 26)\": {'mod': [35, 37, 38, 39, 40]}, \"('GetONNXGLMHandle', None, 19)\": {'mod': [43, 57, 58]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "request_llm/bridge_qwen.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1053", "iss_label": "ToDo", "title": "[Bug]: \u672c\u5730\u7ffb\u8bd1Latex\u51fa\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n* \u95ee\u9898\uff1a\u627e\u4e0d\u5230\u6240\u8c13\u7684\u201cfp\u201d\uff08\u6587\u4ef6\u6307\u9488\uff09\r\n![image](https://github.com/binary-husky/gpt_academic/assets/62052010/40949aca-4ebf-4b24-ade6-a8423654b228)\r\n\r\n\r\n* stack\uff1a\u8fd9\u91cc\u662f\u5c06\u5728**tex\u6587\u4ef6\u5408\u5e76\uff08merge_tex_files\uff09** \u51fd\u6570\u4e2d\u7684\u4e00\u4e2a\u5b50\u51fd\u6570\u7684\u8c03\u7528\uff08merge_tex_files_\uff09\uff0c\u4e3b\u8981\u4f5c\u7528\u5c31\u662f\u5c06\u539f\u59cbtex\u4e2d\u7684\\input\u547d\u4ee4\u5185\u5bb9\u8fdb\u884c\u5408\u5e76\uff0c\u4f46\u5b9e\u9645\u8fc7\u7a0b\u4e2d\u5b58\u5728\u4e00\u4e2a\u95ee\u9898\uff0c\u901a\u8fc7debug\u627e\u5230\uff0c\u5177\u4f53\u7684debug\u4ee3\u7801\uff08\u4e5f\u5c31\u52a0\u4e86\u70b9print\uff09\u548c\u7ed3\u679c\u56fe\u9644\u5728\u4e86\u4e0b\u9762\r\n\r\n```python\r\ndef merge_tex_files_(project_foler, main_file, mode):\r\n \"\"\"\r\n Merge Tex project recrusively\r\n \"\"\"\r\n main_file = rm_comments(main_file)\r\n for s in reversed([q for q in re.finditer(r\"\\\\input\\{(.*?)\\}\", main_file, re.M)]):\r\n f = s.group(1)\r\n fp = os.path.join(project_foler, f)\r\n fp = find_tex_file_ignore_case(fp)\r\n if fp:\r\n with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()\r\n else:\r\n raise 
RuntimeError(f'\u627e\u4e0d\u5230{fp}\uff0cTex\u6e90\u6587\u4ef6\u7f3a\u5931\uff01')\r\n c = merge_tex_files_(project_foler, c, mode)\r\n main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]\r\n return main_file\r\n```\r\n\r\n**debug\u7684\u4ee3\u7801**\r\n```python\r\ndef merge_tex_files_(project_foler, main_file, mode):\r\n \"\"\"\r\n Merge Tex project recrusively\r\n \"\"\"\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print('======== IN merge_tex_files_(SUB FUN) Function ===========')\r\n # print('project_foler:{}\\nmain_file:{}\\nmode:{}'.format(project_foler,main_file,mode))\r\n ## ===\r\n\r\n main_file = rm_comments(main_file)\r\n for s in reversed([q for q in re.finditer(r\"\\\\input\\{(.*?)\\}\", main_file, re.M)]):\r\n ## === AAS ADDED FOR TEST ===\r\n print('======== IN LOOP of merge_tex_files_(SUB FUN)===========')\r\n print(\"s:\",s)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n f = s.group(1)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"f:\",f)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n fp = os.path.join(project_foler, f)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"fp1:\",fp)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n fp = find_tex_file_ignore_case(fp)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"fp2:\",fp)\r\n ## === AAS ADDED FOR TEST === \r\n\r\n if fp:\r\n with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()\r\n else:\r\n raise RuntimeError(f'\u627e\u4e0d\u5230{fp}\uff0cTex\u6e90\u6587\u4ef6\u7f3a\u5931\uff01')\r\n c = merge_tex_files_(project_foler, c, mode)\r\n main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]\r\n return main_file\r\n```\r\n\r\n**\u7ed3\u679c\u56fe**\r\n![image](https://github.com/binary-husky/gpt_academic/assets/62052010/a0e9b9c7-e416-4396-8ff8-5321807d23f7)\r\n\r\n**\u51fa\u9519\u90e8\u5206\u7684\u4ee3\u7801**\r\n```python\r\ndef find_tex_file_ignore_case(fp):\r\n dir_name = os.path.dirname(fp)\r\n base_name = os.path.basename(fp)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print('============ IN find_tex_file_ignore_case Fun ==========')\r\n print('dir_name:',dir_name)\r\n print('base_name',base_name)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n ## \u51fa\u9519\u7684\u95ee\u9898\u5728\u4e8e\u662fbbl\u6587\u4ef6\u5bfc\u5165\uff0c\u800c\u4e0d\u662ftex\uff0c\u5c1d\u8bd5\u4e00\u4e0b\u53d6\u6d88tex\u9650\u5236\r\n if not base_name.endswith('.tex'): base_name+='.tex'\r\n ## === AAS ADDED FOR TEST ===\r\n \r\n if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)\r\n # go case in-sensitive\r\n import glob\r\n for f in glob.glob(dir_name+'/*.tex'):\r\n base_name_s = os.path.basename(fp)\r\n if base_name_s.lower() == base_name.lower(): return f\r\n return None\r\n```\r\n\r\n* \u5b9e\u9645\u9519\u8bef\u539f\u56e0\uff1a\u7b80\u5355\u800c\u8a00\u5c31\u662f\uff1a\u8fd9\u4e2a\u5730\u65b9**\u53ea\u8003\u8651\u5230\u4e86tex\u5408\u5e76**\uff08`find_tex_file_ignore_case`\u51fd\u6570\uff09\uff0c\u8fd8\u5b58\u5728**\u6bd4\u5982`bbl`**\uff08\u53e6\u4e00\u79cd\u4e00\u79cd\u6587\u732e\u5f15\u7528\u7684\u683c\u5f0f\uff0c\u53ef\u4ee5\u76f4\u63a5\u585e\u5230tex\u91cc\u9762\u7f16\u8bd1\uff0c\u76f8\u5bf9\u6bd4\u8f83\u539f\u59cb\u4f46\u5c0f\u5de7\uff0c\u548c\u666e\u901a\u7684`.bib`\u5229\u7528`references`\u7a0d\u6709\u5dee\u5f02\uff09**\u6ca1\u6709\u8003\u8651\u5230**\uff0c\u6240\u4ee5\u9020\u6210\u4e86\u5408\u5e76\u8fc7\u7a0b\u4e2d\u7684tex\u6587\u4ef6\u7f3a\u5931\r\n\r\n* 
\u6539\u8fdb\u5efa\u8bae\uff1ainput\u8fd9\u4e2a\u5730\u65b9\u6765\u8bf4\u4e00\u822c\u786e\u5b9e\u53ea\u6709tex\uff0c\u7528find_tex_file_ignore_case\u8fd9\u4e2a\u51fd\u6570\u4e5f\u633a\u597d\u7684\uff0c\u4f46\u662f\u53ef\u4ee5\u8003\u8651\u4ee5\u4e0b\u5176\u4ed6\u60c5\u51b5\uff0c\u6bd4\u5982\u8bf4\u7eaf\u6587\u672c\uff08.txt)\uff0c\u5176\u4ed6code\uff08`.c, .cpp, .py`\u7b49\u7b49\uff09\uff0c\r\n\r\n- \u65b9\u6848\uff1a**\u76f4\u63a5\u53bb\u6389tex\u7684\u9650\u5236**\uff0c\u4ec0\u4e48\u90fd\u76f4\u63a5\u5f80\u91cc\u63d2\u5165\u5373\u53ef\uff0c\u7136\u540e\u4ea4\u7ed9tex\u7f16\u8bd1\uff0c\u5b9e\u9645\u4e0a\u4e5f\u662f\u8fd9\u6837\uff0c\u6240\u4ee5\u6ca1\u6709\u4ec0\u4e48\u5fc5\u8981\u5728input\u8fd9\u4e2a\u5730\u65b9\u628a\u53ea\u9650\u5236\u63d2\u5165tex\u3002\u5b9e\u5728\u6709\u9519\u8bef\u7684\u8bdd\u5176\u5b9e\u4ea4\u7ed9tex\u8f93\u51fa\u7136\u540e\u67e5\u770b\u5c31\u597d\u4e86\u3002\u4ee3\u7801\u65b9\u9762\u628a\u4e0b\u9762\u8fd9\u884c\u6ce8\u91ca\u6389\u5c31\u597d\u4e86\r\n```python\r\nif not base_name.endswith('.tex'): base_name+='.tex'\r\n```\r\n\r\n\r\n* p.s \u627e\u8fd9\u4e2a\u8fd8\u633a\u8d39\u4e8b\u548chhh\uff0c\u4e4d\u4e00\u770b\u8fd8\u4e0d\u77e5\u9053\u4ec0\u4e48\u60c5\u51b5\uff0c\u4f46\u5176\u5b9e\u5c0f\u95ee\u9898\r\n\r\n\u5728\u6ce8\u91ca\u6389\u4e4b\u540e\uff0c\u6682\u4e14\u5c31\u80fd\u6b63\u5e38\u4f7f\u7528\u4e86\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\nSee the Describe the bug part\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\nSee the Describe the bug part", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "file_loc": "{'base_commit': '19bd0c35ed05e6f99c8e3c0a8c994b1385341cae', 'files': [{'path': 'crazy_functions/latex_fns/latex_toolbox.py', 'status': 'modified', 'Loc': {\"(None, 'find_tex_file_ignore_case', 281)\": {'add': [283], 'mod': [286]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["crazy_functions/latex_fns/latex_toolbox.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e24f077b68e38b679e5ca25853ea2c402f074ea3", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1120", "iss_label": "", "title": "[Feature]: \u5e0c\u671b\u80fd\u591f\u589e\u52a0azure openai gpt4 \u7684\u6a21\u578b\u9009\u9879", "body": "### Class | \u7c7b\u578b\n\n\u7a0b\u5e8f\u4e3b\u4f53\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\nRT", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/e24f077b68e38b679e5ca25853ea2c402f074ea3", "file_loc": "{'base_commit': 'e24f077b68e38b679e5ca25853ea2c402f074ea3', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [83]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [147]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": 
[]}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a799f769e4c48908c3efd64792384403392f2e82", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/67", "iss_label": "", "title": "Cluster faces during extract using dlib.chinese_whispers_clustering", "body": "I have had some success hacking together a pre-processing script to run over my training images. It uses [dlib.chinese_whispers_clustering](http://dlib.net/python/index.html#dlib.chinese_whispers_clustering) to group the found faces in the training data based on likeness. I think one of the keys to good results is good training sets, and this helps to prevent polluting the training data with other people's faces, as tends to be the case with Google image search sets or images with multiple faces.\r\n\r\nThere are a couple of ways I think this could be integrated into the project:\r\n\r\n1) during extract, when generating face chips, discard non-target faces (all faces not in the largest cluster)\r\n2) during convert, where frames have multiple faces, identify only the target face for replacement.\r\n\r\nHere's [the script](https://gist.github.com/badluckwiththinking/92dd6f155bc8babca6422b08b642d35d), sorry it's a bit hacky, I just wanted something that worked and haven't cleaned it up. I'm not sure where I would begin to integrate it into the project, perhaps as an alternative plugin?\r\n\r\n", "code": null, "pr_html_url": "https://github.com/deepfakes/faceswap/pull/61", "commit_html_url": null, "file_loc": "{'base_commit': 'a799f769e4c48908c3efd64792384403392f2e82', 'files': [{'path': 'Dockerfile', 'status': 'modified', 'Loc': {'(None, None, 14)': {'add': [14]}, '(None, None, 10)': {'mod': [10, 11, 12]}, '(None, None, 16)': {'mod': [16]}, '(None, None, 18)': {'mod': [18]}}}, {'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 8, 17, 18, 19, 20]}}}, {'path': 'lib/DetectedFace.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/aligner.py', 'status': 'modified', 'Loc': {\"(None, 'get_align_mat', 25)\": {'mod': [26]}}}, {'path': 'lib/cli.py', 'status': 'modified', 'Loc': {\"('DirectoryProcessor', 'process_arguments', 34)\": {'add': [47], 'mod': [49, 51]}, '(None, None, None)': {'mod': [5]}, \"('DirectoryProcessor', 'process_directory', 51)\": {'mod': [56, 59]}, \"('DirectoryProcessor', None, 14)\": {'mod': [62]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [3, 4, 28]}, \"(None, 'detect_faces', 6)\": {'mod': [9, 11, 12, 13, 14, 15, 16]}}}, {'path': 'lib/model.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/training_data.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 3, 5], 'mod': [45]}, \"(None, 'get_training_data', 13)\": {'mod': [13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 26, 27, 29]}, \"(None, 'random_warp', 47)\": {'mod': [49]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16], 'mod': [1, 2]}, \"(None, 'get_folder', 8)\": {'mod': [10]}, \"(None, 'load_images', 18)\": {'mod': [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {'path': 'plugins/Convert_Adjust.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, \"('Convert', None, 5)\": {'mod': [6, 7]}, \"('Convert', 'patch_image', 12)\": {'mod': [21]}}}, {'path': 'plugins/Convert_Masked.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [6]}, \"('Convert', None, 8)\": {'mod': [9, 10]}, \"('Convert', 'get_new_face', 51)\": {'mod': [54]}, 
\"('Convert', 'get_image_mask', 58)\": {'mod': [67]}}}, {'path': 'plugins/Extract_Align.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, \"('Extract', 'extract', 6)\": {'add': [7]}}}, {'path': 'plugins/Extract_Crop.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}}}, {'path': 'plugins/PluginLoader.py', 'status': 'modified', 'Loc': {\"('PluginLoader', None, 2)\": {'mod': [4, 5, 6, 9, 10, 11, 14, 15]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 64], 'mod': [7, 8, 9]}, \"('ConvertImage', 'process_image', 38)\": {'add': [48], 'mod': [42, 43, 44, 45, 47, 50, 51, 52, 53, 54, 57]}, \"('ConvertImage', None, 13)\": {'mod': [38, 39, 40]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}, \"('ExtractTrainingData', None, 8)\": {'mod': [18, 19]}, \"('ExtractTrainingData', 'process_image', 18)\": {'mod': [22, 23, 24, 25, 26, 28, 29, 30, 31]}}}, {'path': 'scripts/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10], 'mod': [5, 6, 8, 9]}, \"('TrainingProcessor', 'process_arguments', 18)\": {'mod': [24, 25, 26, 27, 28, 29, 30]}, \"('TrainingProcessor', None, 12)\": {'mod': [89, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108, 109, 111, 113, 114, 115, 116]}, \"('TrainingProcessor', 'process', 118)\": {'mod': [119, 122, 123, 125, 127, 129, 131, 132, 133, 134, 135, 136, 138, 139, 140, 142, 143, 144, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/aligner.py", "lib/model.py", "lib/training_data.py", "plugins/Convert_Adjust.py", "plugins/Extract_Align.py", "plugins/Extract_Crop.py", "scripts/train.py", "faceswap.py", "plugins/PluginLoader.py", "plugins/Convert_Masked.py", "lib/DetectedFace.py", "lib/faces_detect.py", "lib/utils.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": ["Dockerfile"], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/718", "iss_label": "bug", "title": "[Windows] cuda_path was not set if the first check succeeded.", "body": "**Describe the bug**\r\nsetup.py file:\r\ncuDNN was not detected if `cuda_check` succeeded on the first check using \"nvcc -V\", because `self.env.cuda_path` was not set\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. run `python setup.py` in a Windows 10 environment\r\n\r\n**Expected behavior**\r\ndetect cuDNN lib\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Windows 10\r\n\r\n**Additional context**\r\nI temporarily disabled the first method of checking CUDA, so it is working for now.\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "file_loc": "{'base_commit': 'f5dd18352c6640bc5c39a01642c7ac7356c0dea1', 'files': [{'path': 'lib/gpu_stats.py', 'status': 'modified', 'Loc': {\"('GPUStats', 'initialize', 64)\": {'mod': [92]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {\"('Checks', None, 314)\": {'add': [353]}, \"('Checks', 'cudnn_check', 458)\": {'add': [459]}, 
\"('Install', 'ask_continue', 542)\": {'add': [543]}, \"('Checks', 'cuda_check_linux', 423)\": {'mod': [442, 443, 444]}, \"('Checks', 'cuda_check_windows', 445)\": {'mod': [451]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py", "lib/gpu_stats.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "dea984efc1c720832d7c32513c806b4b67cc6560", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/590", "iss_label": "", "title": "Disable logging", "body": "In previous commits before the logging implementation, multiple GPUs were able to run different tasks simultaneously (extract/train/convert).\r\n\r\nAfter the logging commit, only 1 task can be run due to the log file being in use by the first process.\r\n\r\nIs there an option to disable logging or specify a log file instead?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/dea984efc1c720832d7c32513c806b4b67cc6560", "file_loc": "{'base_commit': 'dea984efc1c720832d7c32513c806b4b67cc6560', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {\"('ScriptExecutor', 'execute_script', 83)\": {'mod': [85]}, \"('DirOrFileFullPaths', None, 150)\": {'mod': [150]}, \"('FaceSwapArgs', 'get_global_arguments', 265)\": {'mod': [274, 275, 276, 277]}}}, {'path': 'lib/gui/utils.py', 'status': 'modified', 'Loc': {\"('FileHandler', '__init__', 36)\": {'mod': [48, 49, 50, 51, 57, 58]}, \"('ContextMenu', None, 332)\": {'mod': [334]}}}, {'path': 'lib/logger.py', 'status': 'modified', 'Loc': {\"(None, 'log_setup', 71)\": {'mod': [71, 79]}, \"(None, 'file_handler', 89)\": {'mod': [89, 91, 92, 93]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/cli.py", "lib/gui/utils.py", "lib/logger.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/1436", "iss_label": "bug", "title": "PNG images have a black background (no transparency)", "body": "### Description\r\nWhen trying to display a png image (with a transparent background), it shows the background as black; I didn't encounter the issue when trying with the cairo renderer.\r\n\r\n**Code**:\r\n```python\r\n img = ImageMobject(\"./dice.png\")\r\n self.play(FadeIn(img))\r\n```\r\n\r\n\r\n### Results\r\n\"result\"\r\n\r\n# Original image\r\n![dice](https://user-images.githubusercontent.com/38077008/110259246-8fdb3400-7f9e-11eb-992f-b658762c5830.png)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/3b1b/manim/commit/b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "file_loc": "{'base_commit': 'b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181', 'files': [{'path': 'manimlib/shaders/image/frag.glsl', 'status': 'modified', 'Loc': {'(None, None, 12)': {'mod': [12]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["manimlib/shaders/image/frag.glsl"]}} -{"organization": "3b1b", 
"repo_name": "manim", "base_commit": "e1c049bece420bc1190eb3ed4d5d9878c431aa5e", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/394", "iss_label": "", "title": "import readline is failing", "body": "I am trying to run examples_scenes.py and it threw a ModuleNotFoundError when it tried to import readline. This should be easy to resolve - just pip install readline right? Nope. readline apparently doesn't work on Windows, and I got this strange follow-up error below. I don't know what to do at this point. Help?\r\n\r\n\r\nc:\\Tensorexperiments\\manim>python -m manim example_scenes.py SquareToCircle -pl\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\Python36\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"C:\\Program Files\\Python36\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"c:\\Tensorexperiments\\manim\\manim.py\", line 4, in \r\n import manimlib.stream_starter\r\n File \"c:\\Tensorexperiments\\manim\\manimlib\\stream_starter.py\", line 4, in \r\n import readline\r\nModuleNotFoundError: No module named 'readline'\r\n\r\nc:\\Tensorexperiments\\manim>pip install readline\r\nCollecting readline\r\n Downloading https://files.pythonhosted.org/packages/f4/01/2cf081af8d880b44939a5f1b446551a7f8d59eae414277fd0c303757ff1b/readline-6.2.4.1.tar.gz (2.3MB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.3MB 8.5MB/s\r\n Complete output from command python setup.py egg_info:\r\n error: this module is not meant to work on Windows\r\nCommand \"python setup.py egg_info\" failed with error code 1 in C:\\Users\\SAMERN~1\\AppData\\Local\\Temp\\pip-install-z8maklzo\\readline\\", "code": null, "pr_html_url": "https://github.com/3b1b/manim/pull/672", "commit_html_url": null, "file_loc": "{'base_commit': 'e1c049bece420bc1190eb3ed4d5d9878c431aa5e', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 11)': {'add': [11]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "660d1d1e64c5e28e96bf9b8172cd87d1d809fd07", "is_iss": 0, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5876", "iss_label": "bug\nseverity:medium", "title": "[Bug]: \"The model produces invalid content\"", "body": "### Is there an existing issue for the same bug?\r\n\r\n- [X] I have checked the existing issues.\r\n\r\n### Describe the bug and reproduction steps\r\n\r\nhttps://www.all-hands.dev/share?share_id=dab4a77e7d64e7a4dc6124dc672d3f4beb2d411a33155977425b821e292d4f4c\r\nThe LLM is `gpt-4o`\r\nIn the logs I got\r\n```yaml\r\n{'error': {'message': 'The model produced invalid content. 
Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}\r\n```\r\n\r\n### OpenHands Installation\r\n\r\nDocker command in README\r\n\r\n### OpenHands Version\r\n\r\n0.17\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Logs, Errors, Screenshots, and Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/7045", "commit_html_url": null, "file_loc": "{'base_commit': '660d1d1e64c5e28e96bf9b8172cd87d1d809fd07', 'files': [{'path': 'openhands/llm/llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [78]}, \"('LLM', 'wrapper', 180)\": {'mod': [220, 221, 222]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["openhands/llm/llm.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "7e8453cf1ec992e5df5cebfeda08552c58e7c9bc", "is_iss": 0, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2656", "iss_label": "", "title": "sos filepipelines 302", "body": "hi\r\n\r\n when i setting file_urls \"http://m.baidu.com/api?action=redirect&token=kpyysd&from=1014090y&type=app&dltype=new&refid=2650327114&tj=soft_5845028_88031597_%E8%AF%AD%E9%9F%B3%E6%90%9C%E7%B4%A2&refp=action_search&blink=da5b687474703a2f2f7265736765742e39312e636f6d2f536f66742f436f6e74726f6c6c65722e617368783f616374696f6e3d646f776e6c6f61642674706c3d312669643d34313034393931c658&crversion=1\"\r\n \r\n this url redirect 3 times so when i use scrap download it the scrapy retrun 302 how can i setting it can working ? please help me!\r\n![qq 20170316182328](https://cloud.githubusercontent.com/assets/3350372/23991950/1795e90a-0a76-11e7-9b19-4128bfdb3914.png)\r\n\r\n \r\n ", "code": null, "pr_html_url": "https://github.com/scrapy/scrapy/pull/2616", "commit_html_url": null, "file_loc": "{'base_commit': '7e8453cf1ec992e5df5cebfeda08552c58e7c9bc', 'files': [{'path': 'docs/topics/media-pipeline.rst', 'status': 'modified', 'Loc': {'(None, None, 324)': {'add': [324]}}}, {'path': 'scrapy/pipelines/files.py', 'status': 'modified', 'Loc': {\"('FilesPipeline', '__init__', 226)\": {'mod': [252]}}}, {'path': 'scrapy/pipelines/media.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 7]}, \"('MediaPipeline', None, 16)\": {'add': [29, 95], 'mod': [27]}, \"('MediaPipeline', '_check_media_to_download', 96)\": {'mod': [106]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 10, 14, 122]}, \"('Root', '__init__', 152)\": {'add': [162]}}}, {'path': 'tests/test_pipeline_media.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 84]}, \"('BaseMediaPipelineTestCase', None, 22)\": {'add': [24]}, \"('MediaPipelineTestCase', 'test_use_media_to_download_result', 245)\": {'add': [251]}, \"('BaseMediaPipelineTestCase', 'setUp', 26)\": {'mod': [28]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/pipelines/media.py", "scrapy/pipelines/files.py", "tests/mockserver.py"], "doc": ["docs/topics/media-pipeline.rst"], "test": ["tests/test_pipeline_media.py"], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": 
"cc00f21a358923c03e334e245d58df0853d10661", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/57069", "iss_label": "networking\nmodule\nsupport:network\nnxos\nbug\naffects_2.7\ncisco", "title": "nxos_vpc breaks using default vrf", "body": "##### SUMMARY\r\nWhen using pkl_vrf\": \"default\" command is missing vrf value\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nModule: nxos_vpc\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.7.2\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]\r\n```\r\n\r\n##### CONFIGURATION\r\nasterisk due privacy\r\n```\r\nCACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile\r\nCACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /**/config/ansible/facts\r\nDEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/**/config/ansible/**/hosts.yml']\r\nDISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = True\r\nHOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False\r\nRETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n```\r\n nxos_vpc:\r\n domain: 10\r\n pkl_src: 1.1.1.2\r\n pkl_dest: 1.1.1.1\r\n pkl_vrf: default\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n```\r\n \"commands\": [\r\n \"vpc domain 10\",\r\n \"peer-keepalive destination 1.1.1.1 source 1.1.1.2 vrf default\",\r\n```\r\n##### ACTUAL RESULTS\r\n```\r\n \"commands\": [\r\n \"vpc domain 10\",\r\n \"peer-keepalive destination 1.1.1.1 source 1.1.1.2\",\r\n```\r\n\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/57370", "commit_html_url": null, "file_loc": "{'base_commit': 'cc00f21a358923c03e334e245d58df0853d10661', 'files': [{'path': 'lib/ansible/modules/network/nxos/nxos_vpc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [60, 63, 277]}, \"(None, 'main', 317)\": {'add': [396], 'mod': [392]}, \"(None, 'get_vpc', 222)\": {'mod': [265, 266, 267, 268, 269, 270, 271, 272, 273, 274]}, \"(None, 'get_commands_to_config_vpc', 278)\": {'mod': [288]}}}, {'path': 'test/units/modules/network/nxos/test_nxos_vpc.py', 'status': 'modified', 'Loc': {\"('TestNxosVpcModule', 'setUp', 31)\": {'add': [33]}, \"('TestNxosVpcModule', 'tearDown', 40)\": {'add': [41]}, \"('TestNxosVpcModule', 'load_fixtures', 45)\": {'add': [54], 'mod': [56]}, \"('TestNxosVpcModule', 'test_nxos_vpc_present', 58)\": {'add': [66]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_vpc.py"], "doc": [], "test": ["test/units/modules/network/nxos/test_nxos_vpc.py"], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "44b53141748d29220441e0799b54ea3130ac6753", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/78076", "iss_label": "support:core\nhas_pr\ndocs\naffects_2.12", "title": "Minor change to the getting started diagram", "body": "### Summary\n\nI was looking through the new Ansible getting started guide and noticed one of the nodes in the diagram has a duplicate label. 
s/node 2/node 3\n\n### Issue Type\n\nDocumentation Report\n\n### Component Name\n\nhttps://github.com/ansible/ansible/blob/devel/docs/docsite/rst/images/ansible_basic.svg\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.12.6]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/dnaro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n ansible collection location = /home/dnaro/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 12.0.1 20220308 (Red Hat 12.0.1-0)]\r\n jinja version = 3.0.3\r\n libyaml = True\n```\n\n\n### Configuration\n\n```console\n$ ansible-config dump --only-changed -t all\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n:...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:
\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n~\r\n~\r\n(END)...skipping...\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\r\n~\r\n~\r\n~\r\n~\n```\n\n\n### OS / Environment\n\nFedora 36\n\n### Additional Information\n\nIt corrects something that is wrong.\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/78077", "commit_html_url": null, "file_loc": "{'base_commit': '44b53141748d29220441e0799b54ea3130ac6753', 'files': [{'path': 'docs/docsite/rst/images/ansible_basic.svg', 'status': 'modified', 'Loc': {'(None, None, 27)': {'mod': [27, 28, 29]}, '(None, None, 35)': {'mod': [35]}, '(None, None, 51)': {'mod': [51]}, '(None, None, 67)': {'mod': [67]}, '(None, None, 192)': {'mod': [192]}, '(None, None, 194)': {'mod': [194, 195, 196, 197, 198, 199, 200]}, '(None, None, 203)': {'mod': [203]}, '(None, None, 205)': {'mod': [205]}, '(None, None, 207)': {'mod': [207]}, '(None, None, 209)': {'mod': [209]}, '(None, None, 211)': {'mod': [211]}, '(None, None, 213)': {'mod': [213]}, '(None, None, 215)': {'mod': [215]}, '(None, None, 217)': {'mod': [217]}, '(None, None, 219)': {'mod': [219]}, '(None, None, 221)': {'mod': [221]}, '(None, None, 223)': {'mod': [223, 224, 225, 226]}, '(None, None, 230)': {'mod': [230]}, '(None, None, 233)': {'mod': [233]}, '(None, None, 236)': {'mod': [236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323]}, '(None, None, 326)': {'mod': [326, 327, 328, 329, 330, 331, 332]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/docsite/rst/images/ansible_basic.svg"], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "0335d05f437eb59bcb77a58ef7819562f298ba79", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/3730", "iss_label": "", "title": "ansible stacktrace", "body": "simple ansible facts now stack trace:\n\n```\nansible -m setup -c local -i ~/hosts 127.0.0.1\n```\n\n127.0.0.1 | FAILED => Traceback (most recent call last):\n File \"/home/bcoca/work/ansible/lib/ansible/runner/**init**.py\", line 367, in _executor\n exec_rc = self._executor_internal(host, new_stdin)\n File \"/home/bcoca/work/ansible/lib/ansible/runner/__init__.py\", line 389, in 
_executor_internal\n host_variables = self.inventory.get_variables(host)\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/__init__.py\", line 284, in get_variables\n self._vars_per_host[hostname] = self._get_variables(hostname)\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/__init__.py\", line 294, in _get_variables\n vars_results = [ plugin.run(host) for plugin in self._vars_plugins ]\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/vars_plugins/group_vars.py\", line 43, in run\n self.pb_basedir = os.path.abspath(inventory.playbook_basedir())\n File \"/usr/lib/python2.7/posixpath.py\", line 343, in abspath\n if not isabs(path):\n File \"/usr/lib/python2.7/posixpath.py\", line 53, in isabs\n return s.startswith('/')\nAttributeError: 'NoneType' object has no attribute 'startswith'\n\nbisect showed 16efb45735899737aacc106f89014ee9551fd625 as culprit\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ansible/ansible/commit/0335d05f437eb59bcb77a58ef7819562f298ba79", "file_loc": "{'base_commit': '0335d05f437eb59bcb77a58ef7819562f298ba79', 'files': [{'path': 'lib/ansible/inventory/vars_plugins/group_vars.py', 'status': 'modified', 'Loc': {\"('VarsModule', 'run', 38)\": {'mod': [43]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/inventory/vars_plugins/group_vars.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "f841c2803a1e36bb6f392c466d36b669f9243464", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/77073", "iss_label": "module\nsupport:core\nfeature\nP3\naffects_2.13", "title": "Add support for deb822 apt sources with apt_repository", "body": "### Summary\n\nDebian has deprecated APT's original `sources.list` file format. As of Debian 11 (and Ubuntu 20.10), APT uses [the newer \"DEB822\" format](https://manpages.debian.org/unstable/apt/sources.list.5.en.html#DEB822-STYLE_FORMAT) by default. This format has been supported since APT 1.1, which goes back to Ubuntu 16.04 and Debian 9. 
\r\n\r\nAnsible should generate DEB822 `.sources` files instead of legacy `.list` files on supported systems.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\napt_repository\n\n### Additional Information\n\nHere's an example of the deb822 format:\r\n\r\n```\r\nTypes: deb\r\nURIs: http://deb.debian.org/debian\r\nSuites: bullseye\r\nComponents: main contrib non-free\r\n```\r\n\r\nThe `apt_repository` module can behave a lot more like the `yum_repository` one with this new format.\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/80018", "commit_html_url": null, "file_loc": "{'base_commit': 'f841c2803a1e36bb6f392c466d36b669f9243464', 'files': [{'path': 'test/integration/targets/setup_deb_repo/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, 61)': {'add': [61]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["test/integration/targets/setup_deb_repo/tasks/main.yml"], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/58126", "iss_label": "networking\npython3\nmodule\nsupport:network\nbug\naffects_2.8\nios\ncisco", "title": "ios_facts module not enumerating ansible_net_model in Ansible 2.8", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\n\r\nios_facts module not enumerating ansible_net_model in Ansible 2.8\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\nios_facts\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.8.1\r\n config file = /home/ryan/test/ansible.cfg\r\n configured module search path = ['/home/ryan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.6/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.8 (default, Apr 25 2019, 21:02:35) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n $ ansible-config dump --only-changed\r\nDEFAULT_GATHERING(/home/ryan/test/ansible.cfg) = explicit\r\nDEFAULT_HOST_LIST(/home/ryan/test/ansible.cfg) = ['/home/ryan/test/inventory']\r\nDEPRECATION_WARNINGS(/home/ryan/test/ansible.cfg) = False\r\nHOST_KEY_CHECKING(/home/ryan/test/ansible.cfg) = False\r\nPERSISTENT_COMMAND_TIMEOUT(/home/ryan/test/ansible.cfg) = 30\r\nPERSISTENT_CONNECT_TIMEOUT(/home/ryan/test/ansible.cfg) = 30\r\nRETRY_FILES_ENABLED(/home/ryan/test/ansible.cfg) = False\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nHost OS: CentOS 7 virtual machine (VMware player)\r\nPython versions: Reproducible on 2.7.5 and 3.6\r\n\r\nTested on:\r\nCSR1000v running IOS-XE 16.09.03\r\nISR4331 running IOS-XE 16.06.03\r\nCatalyst 3850 running IOS-XE 03.06.03E\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nRun playbook to gather ios_facts, ansible_net_model is not included in any subset. 
Should always be included, per: https://docs.ansible.com/ansible/latest/modules/ios_facts_module.html\r\n\r\n\r\n```yaml\r\n name: IOS Facts gathering\r\n hosts: CSRTEST\r\n connection: network_cli\r\n gather_facts: yes\r\n tasks:\r\n - name: Gather facts from device\r\n ios_facts:\r\n gather_subset: all\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nExpecting ansible_net_model back as one of the facts gathered.\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\n```paste below\r\nTASK [Gather facts from device] ****************************************************************************************************************************************************************************************************************************************\r\ntask path: /home/ryan/test/test_facts.yml:6\r\n attempting to start connection\r\n using connection plugin network_cli\r\n found existing local domain socket, using it!\r\n updating play_context for connection\r\n\r\n local domain socket path is /home/ryan/.ansible/pc/5485150d9c\r\n ESTABLISH LOCAL CONNECTION FOR USER: ryan\r\n EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379 `\" && echo ansible-tmp-1561047456.179465-161563255687379=\"` echo /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379 `\" ) && sleep 0'\r\nUsing module file /usr/local/lib/python3.6/site-packages/ansible/modules/network/ios/ios_facts.py\r\n PUT /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/tmp6gh5jigs TO /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py\r\n EXEC /bin/sh -c 'chmod u+x /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/ /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py && sleep 0'\r\n EXEC /bin/sh -c '/usr/bin/python /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py && sleep 0'\r\n EXEC /bin/sh -c 'rm -f -r /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/ > /dev/null 2>&1 && sleep 0'\r\nok: [CSRTEST] => {\r\n \"ansible_facts\": {\r\n \"ansible_net_all_ipv4_addresses\": [\r\n \"192.168.102.133\"\r\n ],\r\n \"ansible_net_all_ipv6_addresses\": [],\r\n \"ansible_net_api\": \"cliconf\",\r\n \"ansible_net_config\": \"!\\n! Last configuration change at 16:13:51 UTC Thu Jun 20 2019\\n!\\nversion 16.9\\nservice timestamps debug datetime msec\\nservice timestamps log datetime msec\\nplatform qfp utilization monitor load 80\\nno platform punt-keepalive disable-kernel-core\\nplatform console virtual\\n!\\nhostname CSRTEST\\n!\\nboot-start-marker\\nboot-end-marker\\n!\\n!\\n!\\nno aaa new-model\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nlogin on-success log\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nsubscriber templating\\n! \\n! \\n! \\n! 
\\n!\\nmultilink bundle-name authenticated\\n!\\n!\\n!\\n!\\n!\\ncrypto pki trustpoint TP-self-signed-3768273344\\n enrollment selfsigned\\n subject-name cn=IOS-Self-Signed-Certificate-3768273344\\n revocation-check none\\n rsakeypair TP-self-signed-3768273344\\n!\\n!\\ncrypto pki certificate chain TP-self-signed-3768273344\\n certificate self-signed 01\\n 30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \\n 31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \\n 69666963 6174652D 33373638 32373333 3434301E 170D3139 30363230 31363134 \\n 30395A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \\n 4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D33 37363832 \\n 37333334 34308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \\n 0A028201 0100891F 68316AAF AF54176F 7D9C39F5 E34FB187 F4D88C88 8265FDE9 \\n B3A338A1 FADD5622 1A2887D2 1E655477 9EDEA72C 94EAB9C4 744C428C 83BC30A1 \\n E18B6EBC 69856EC8 4F5E8649 9D442076 3544F7D1 01AC0B0B 76E9CBE1 AEFA2C4A \\n 4EB0EE8B 29895287 97A9C7CC 586A0241 19DC79E9 35A415A5 7D976DAB 7E072350 \\n C2617E80 F8DB84D1 CFC0EBE5 3194A8C4 2E7AAC3C 7F97D423 2B016D97 C12164A6 \\n D75B73E8 A9EA96ED 079CAB76 2B8DEA2E BBB61836 C913E020 B0F7659D DA4CF838 \\n 7FCC72B5 522932D6 37196DD2 2897D197 BD6FD0C0 576CED54 85A7C94B 029BC4A3 \\n F0D7F7CC 4AAFC50A 297B6E6E ECF97699 2062D939 38DD585D E78A2794 40381513 \\n 75AEAA98 F8550203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \\n 301F0603 551D2304 18301680 147DF3A5 74A80322 7F0D4A33 C839CE1E 479BCFD0 \\n 8C301D06 03551D0E 04160414 7DF3A574 A803227F 0D4A33C8 39CE1E47 9BCFD08C \\n 300D0609 2A864886 F70D0101 05050003 82010100 87C47448 FAE908F7 47B564D7 \\n 992A8E16 24966357 D0B864AB B32BB538 6A5371F3 0BF093E8 D0E461AC 2ED99B84 \\n 768E700C A88464AA B8E0B774 2308D4A2 881495B7 AFE1F6D7 3D25AFEE 2A7D6653 \\n 6814B4AC E4189640 15C0003E 1E1EE9B1 6E3FF371 448CA017 DA622BCD 49EF07C5 \\n FB4D6859 208FF4FE 29AEB2F3 BB9BA26E 1D140B6A B2C4DADA 913D4846 84370AF0 \\n A67E3D78 F0E9CE1E 9D344542 2732C2A7 70A50162 B32BBE36 BF3382AD 641DB7A6 \\n 1AE1FD10 2CFEC3A6 1ACCD4FD 58E48276 9F2417F4 1871A9F7 11C61604 09E4BBEB \\n 2D821D14 815A48FC 7B14A7C2 8766F1B1 7C04112A 139DB760 EFF339D0 1BA82B52 \\n 5E85BBA9 3FC49134 4FEDD944 BA27F4A4 1317652C\\n \\tquit\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nlicense udi pid CSR1000V sn 9U4DE1R3P2Y\\nlicense boot level ax\\nno license smart enable\\ndiagnostic bootup level minimal\\n!\\nspanning-tree extend system-id\\n!\\n!\\n!\\nusername ansible privilege 15 secret 5 $1$Ax9o$F2JTz/1dXjNSB21muGqxU1\\n!\\nredundancy\\n!\\n!\\n!\\n!\\n!\\n!\\n! \\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n! \\n! 
\\n!\\n!\\ninterface GigabitEthernet1\\n ip address dhcp\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\ninterface GigabitEthernet2\\n no ip address\\n shutdown\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\ninterface GigabitEthernet3\\n no ip address\\n shutdown\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\nip forward-protocol nd\\nip http server\\nip http authentication local\\nip http secure-server\\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 dhcp\\n!\\nip ssh version 2\\n!\\n!\\n!\\n!\\n!\\ncontrol-plane\\n!\\n!\\n!\\n!\\n!\\n!\\nline con 0\\n stopbits 1\\nline vty 0 4\\n login local\\nline vty 5 15\\n login local\\n!\\n!\\n!\\n!\\n!\\n!\\nend\",\r\n \"ansible_net_filesystems\": [\r\n \"bootflash:\"\r\n ],\r\n \"ansible_net_filesystems_info\": {\r\n \"bootflash:\": {\r\n \"spacefree_kb\": 6801160,\r\n \"spacetotal_kb\": 7712692\r\n }\r\n },\r\n \"ansible_net_gather_subset\": [\r\n \"hardware\",\r\n \"default\",\r\n \"interfaces\",\r\n \"config\"\r\n ],\r\n \"ansible_net_hostname\": \"CSRTEST\",\r\n \"ansible_net_image\": \"bootflash:packages.conf\",\r\n \"ansible_net_interfaces\": {\r\n \"GigabitEthernet1\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [\r\n {\r\n \"address\": \"192.168.102.133\",\r\n \"subnet\": \"24\"\r\n }\r\n ],\r\n \"lineprotocol\": \"up\",\r\n \"macaddress\": \"000c.29a5.1122\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"up\",\r\n \"type\": \"CSR vNIC\"\r\n },\r\n \"GigabitEthernet2\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [],\r\n \"lineprotocol\": \"down\",\r\n \"macaddress\": \"000c.29a5.112c\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"administratively down\",\r\n \"type\": \"CSR vNIC\"\r\n },\r\n \"GigabitEthernet3\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [],\r\n \"lineprotocol\": \"down\",\r\n \"macaddress\": \"000c.29a5.1136\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"administratively down\",\r\n \"type\": \"CSR vNIC\"\r\n }\r\n },\r\n \"ansible_net_iostype\": \"IOS-XE\",\r\n \"ansible_net_memfree_mb\": 1863849,\r\n \"ansible_net_memtotal_mb\": 2182523,\r\n \"ansible_net_neighbors\": {},\r\n \"ansible_net_python_version\": \"2.7.5\",\r\n \"ansible_net_serialnum\": \"9U4DE1R3P2Y\",\r\n \"ansible_net_system\": \"ios\",\r\n \"ansible_net_version\": \"16.09.03\"\r\n },\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"auth_pass\": null,\r\n \"authorize\": null,\r\n \"gather_subset\": [\r\n \"all\"\r\n ],\r\n \"host\": null,\r\n \"password\": null,\r\n \"port\": null,\r\n \"provider\": null,\r\n \"ssh_keyfile\": null,\r\n \"timeout\": null,\r\n \"username\": null\r\n }\r\n }\r\n}\r\nMETA: ran handlers\r\nMETA: ran handlers\r\n\r\nPLAY RECAP *************************************************************************************************************************************************************************************************************************************************************\r\nCSRTEST : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\n\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/58174", "commit_html_url": null, "file_loc": "{'base_commit': '6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b', 'files': [{'path': 'lib/ansible/plugins/cliconf/ios.py', 'status': 'modified', 'Loc': 
{\"('Cliconf', 'get_device_info', 199)\": {'mod': [210, 211, 212]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/plugins/cliconf/ios.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "d1cd6ee56d492deef40f6f2f178832a1815730a5", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/37734", "iss_label": "cloud\nazure\nmodule\naffects_2.4\nsupport:certified\nfeature", "title": "Add network interface to Load Balancer Backend pool in azure_rm_networkinterface", "body": "##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### COMPONENT NAME\r\nazure_rm_networkinterface\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible --version\r\nansible 2.4.3.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/dgermain/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\nansible-config dump --only-changed\r\n#empty return\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### SUMMARY\r\nIn current azure loadbalancer module, you can create Backend pools, but you don't have the possibility to add network interfaces in this Backend pool, neither in *azure_rm_networkinterface* nor in *azure_rm_loadbalancer*.\r\nAs an example, this feature is present in Powershell azure CLI, when handling network interfaces :\r\n```\r\n $nic = Get-AzurermNetworkInterface -name $virtualnetworkcardname\" -resourcegroupname $resourceGroup\r\n $nic.IpConfigurations[0].LoadBalancerBackendAddressPools=$backend\r\n Set-AzureRmNetworkInterface -NetworkInterface $nic\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\nAs far as I can tell, you don't have this option in the ansible module\r\n\r\n##### EXPECTED RESULTS\r\nHave an option to allow this\r\n\r\n##### ACTUAL RESULTS\r\nNo option to do so", "code": null, "pr_html_url": "github.com/ansible/ansible/pull/38643", "commit_html_url": null, "file_loc": "{'base_commit': 'd1cd6ee56d492deef40f6f2f178832a1815730a5', 'files': [{'path': 'lib/ansible/module_utils/azure_rm_common.py', 'status': 'modified', 'Loc': {\"('AzureRMModuleBase', None, 216)\": {'add': [605]}, '(None, None, None)': {'mod': [131]}}}, {'path': 'lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [66, 73, 153, 198, 210, 239, 351], 'mod': [55, 57, 58, 59, 60, 61, 62, 63, 64, 65, 68, 127, 128, 160, 162, 163, 165, 186, 197, 207, 220, 222, 233, 234, 277, 286]}, \"(None, 'nic_to_dict', 306)\": {'add': [313]}, \"('AzureRMNetworkInterface', 'exec_module', 411)\": {'add': [525], 'mod': [427, 428, 429, 431, 432, 435, 468, 469, 470, 473, 477, 514, 515, 516, 530, 532, 534]}, \"('AzureRMNetworkInterface', None, 356)\": {'add': [600], 'mod': [594]}, \"('AzureRMNetworkInterface', 'construct_ip_configuration_set', 601)\": {'add': [606]}, \"('AzureRMNetworkInterface', '__init__', 358)\": {'mod': [364, 371, 372, 380, 386, 392, 393, 397]}, \"('AzureRMNetworkInterface', 'get_security_group', 594)\": {'mod': [597]}}}, {'path': 'test/integration/targets/azure_rm_networkinterface/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, 19)': {'add': 
[19]}, '(None, None, 124)': {'add': [124]}, '(None, None, 131)': {'add': [131]}, '(None, None, 148)': {'add': [148]}, '(None, None, 164)': {'add': [164]}, '(None, None, 179)': {'add': [179]}, '(None, None, 189)': {'add': [189]}, '(None, None, 36)': {'mod': [36]}, '(None, None, 40)': {'mod': [40, 41]}, '(None, None, 43)': {'mod': [43]}, '(None, None, 48)': {'mod': [48]}, '(None, None, 52)': {'mod': [52, 53]}, '(None, None, 55)': {'mod': [55]}, '(None, None, 78)': {'mod': [78]}, '(None, None, 90)': {'mod': [90]}, '(None, None, 113)': {'mod': [113]}, '(None, None, 137)': {'mod': [137]}, '(None, None, 159)': {'mod': [159]}, '(None, None, 176)': {'mod': [176]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nConfig\nTest"}, "loctype": {"code": ["lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py", "lib/ansible/module_utils/azure_rm_common.py"], "doc": [], "test": [], "config": ["test/integration/targets/azure_rm_networkinterface/tasks/main.yml"], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "6f8c1da0c805f334b8598fd2556f7ed92dc9348e", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/79277", "iss_label": "bug\ntraceback\naffects_2.13", "title": "ansible-test fails to report the proper error when validating ansible-doc", "body": "### Summary\n\nThe utility ansible-test sanity is fantastic and does its job. Unfortunately, when validating the ansible-doc, if the YAML is malformed, you'll get a parsing error instead of the actual YAML error.\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nansible-test\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.13.6rc1.post0] (stable-2.13 33852737fd) last updated 2022/10/31 21:51:24 (GMT +200)\r\n config file = None\r\n configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/warkdev/ansible/lib/ansible\r\n ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /home/warkdev/ansible/bin/ansible\r\n python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]\r\n jinja version = 3.1.2\r\n libyaml = False\n```\n\n\n### Configuration\n\n```console\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\n```\n\n\n### OS / Environment\n\nDebian 12\n\n### Steps to Reproduce\n\n* Generate an ansible module that you want to validate and introduce invalid YAML syntax in the ansible-doc\r\n* Run ansible-test sanity against that module\r\n* Verify that the error is happening\r\n\r\nI've tracked down the issue till this code: https://github.com/ansible/ansible/blob/stable-2.13/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py#L157\n\n### Expected Results\n\nERROR: Found 2 yamllint issue(s) which need to be resolved:\r\nERROR: plugins/modules/axway_cft_about_info.py:36:15: error: RETURN: syntax error: mapping values are not allowed here (syntax)\r\nERROR: plugins/modules/axway_cft_about_info.py:36:15: unparsable-with-libyaml: None - mapping values are not allowed in this context\n\n### Actual Results\n\n```console\nTraceback (most recent call last):\r\n File 
\"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py\", line 153, in parse_yaml\r\n data = yaml_load(value, Loader=loader)\r\n File \"/root/.ansible/test/venv/sanity.validate-modules/3.10/487215fd/lib/python3.10/site-packages/yaml/__init__.py\", line 81, in load\r\n return loader.get_single_data()\r\n File \"/root/.ansible/test/venv/sanity.validate-modules/3.10/487215fd/lib/python3.10/site-packages/yaml/constructor.py\", line 49, in get_single_data\r\n node = self.get_single_node()\r\n File \"yaml/_yaml.pyx\", line 673, in yaml._yaml.CParser.get_single_node\r\n File \"yaml/_yaml.pyx\", line 687, in yaml._yaml.CParser._compose_document\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 845, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 845, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 847, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 860, in yaml._yaml.CParser._parse_next_event\r\nyaml.scanner.ScannerError: mapping values are not allowed in this context\r\n in \"\", line 9, column 15\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate.py\", line 6, in \r\n main()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2475, in main\r\n run()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2363, in run\r\n mv1.validate()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2156, in validate\r\n doc_info, docs = self._validate_docs()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 1080, in _validate_docs\r\n data, errors, traces = parse_yaml(doc_info['RETURN']['value'],\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py\", line 157, in parse_yaml\r\n e.problem_mark.line += lineno - 1\r\nAttributeError: attribute 'line' of 'yaml._yaml.Mark' objects is not writable\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/79682", "commit_html_url": null, "file_loc": "{'base_commit': '6f8c1da0c805f334b8598fd2556f7ed92dc9348e', 'files': [{'path': 'test/integration/targets/ansible-test-sanity-validate-modules/runme.sh', 'status': 'modified', 'Loc': {'(None, None, 7)': {'mod': [7]}}}, {'path': 'test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py', 'status': 'modified', 'Loc': {\"(None, 'parse_yaml', 137)\": {'mod': [157, 158, 161]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py"], "doc": [], "test": [], "config": [], "asset": 
["test/integration/targets/ansible-test-sanity-validate-modules/runme.sh"]}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "d97080174e9bbebd27a967368934ef91d1f28f64", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/32070", "iss_label": "networking\naffects_2.4\nsupport:core\nnxos\nbug\ncisco", "title": "Occasional failures with NXOS modules", "body": "##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nnxos modules\r\n\r\n##### ANSIBLE VERSION\r\nansible 2.4.0.0\r\n config file = /project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg\r\n configured module search path = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]\r\n\r\n\r\n##### CONFIGURATION\r\nDEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = [u'/etc/ansible/roles/plugins/action', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/actions']\r\nDEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = [u'/etc/ansible/roles/plugins/callback', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/callbacks']\r\nDEFAULT_CALLBACK_WHITELIST(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = ['profile_tasks']\r\nDEFAULT_FILTER_PLUGIN_PATH(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/plugins/filter']\r\nDEFAULT_FORCE_HANDLERS(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = True\r\nDEFAULT_HOST_LIST(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/inventory']\r\nDEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']\r\nDEFAULT_ROLES_PATH(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles-dev', u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles', u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles-base-shell']\r\nHOST_KEY_CHECKING(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = False\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 16.04\r\nNXOS: version 7.0(3)I7(1)\r\n\r\n##### SUMMARY\r\nI observe non deterministic failures with the nxos modules when configuring 9200 series switches, in this specific case a 92160.\r\n\r\n##### STEPS TO REPRODUCE\r\nSadly this is difficult to reproduce. I have a playbook which configures a couple of dozen ports on several switches, each taking a dozen or more tasks. This is a sufficient number of tasks to occasionally trigger a failure of a task. Running the playbook again most likely will result in no errors.\r\n\r\nPlaybook https://gist.github.com/jrosser/b4d88748f5b1323828a8f2f266596ead\r\n\r\n##### EXPECTED RESULTS\r\nAll tasks to run without error. 
Running with -vvvv gives no insight into the communication with the switch so doesn't provide any useful debug.\r\n\r\n##### ACTUAL RESULTS\r\nVery occasionally one or more tasks will fail.\r\n```\r\nTASK [Ensure all layer 2 interfaces are up] ***********************************************************************************************************\r\nTuesday 24 October 2017 10:54:15 +0000 (0:00:21.378) 0:01:00.450 ******* \r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/1', u'description': u'to infra0-1-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/1', u'description': u'to infra0-1-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/2', u'description': u'to infra0-2-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/2', u'description': u'to infra0-2-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/3', u'description': u'to infra0-3-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/3', u'description': u'to infra0-3-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/4', u'description': u'to infra0-4-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/4', u'description': u'to infra0-4-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/5', u'description': u'to infra0-5-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/5', u'description': u'to infra0-5-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/6', u'description': u'to infra0-6-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/6', u'description': u'to infra0-6-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/7', u'description': u'to infra0-7-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/7', u'description': u'to infra0-7-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/8', u'description': u'to infra0-8-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/8', u'description': u'to infra0-8-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/9', u'description': u'to infra0-1-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/9', u'description': u'to infra0-1-b505-9'})\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: TypeError: string indices must be integers, not str\r\nfailed: [fbs0-b505-10] (item={u'interface': u'Ethernet1/10', u'description': u'to infra0-2-b505-9'}) => {\"changed\": false, \"failed\": true, \"item\": {\"description\": \"to infra0-2-b505-9\", \"interface\": \"Ethernet1/10\"}, \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 710, in \\n main()\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 701, in main\\n normalized_interface)\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 534, in smart_existing\\n existing = get_interface(normalized_interface, module)\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 281, in get_interface\\n interface_table = body['TABLE_interface']['ROW_interface']\\nTypeError: string indices must be integers, not str\\n\", \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\", \"rc\": 0}\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/10', u'description': u'to infra0-2-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/11', u'description': u'to infra0-3-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/11', u'description': u'to infra0-3-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/12', u'description': u'to infra0-4-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/12', u'description': u'to infra0-4-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/13', u'description': u'to infra0-5-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/13', u'description': u'to infra0-5-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/14', u'description': u'to infra0-6-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/14', u'description': u'to infra0-6-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/15', u'description': u'to infra0-7-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/15', u'description': u'to infra0-7-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/16', u'description': u'to infra0-8-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/16', u'description': u'to infra0-8-b505-9'})\r\nok: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/47', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/47', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/48', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/48', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\n\r\n\r\nTASK [Ensure vrrpv3 is applied for vlans that need it] ************************************************************************************************\r\nTuesday 24 October 2017 11:01:48 +0000 (0:00:11.191) 0:08:33.606 ******* \r\nskipping: [fbs0-b505-9] => (item={u'vrf': u'default', u'vlan_id': 999}) \r\nok: [fbs0-b505-9] => (item={u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 23, u'description': u'storage-clients', u'address': u'10.23.128.5'}, u'vrf': u'STORAGE', u'address': u'10.23.128.1/24', u'interface': u'Vlan1923', u'extra_lines': [u'mtu 9216'], u'vlan_id': 1923})\r\nok: [fbs0-b505-9] => 
(item={u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 21, u'description': u'storage-services', u'address': u'10.21.128.5'}, u'vrf': u'STORAGE', u'address': u'10.21.128.1/24', u'interface': u'Vlan1921', u'extra_lines': [u'mtu 9216'], u'vlan_id': 1921})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1911', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 11, u'description': u'osmgmt', u'address': u'10.11.128.5'}, u'vrf': u'OSMGMT', u'vlan_id': 1911, u'address': u'10.11.128.1/24'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1931', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 31, u'description': u'metal', u'address': u'10.31.128.5'}, u'vrf': u'METAL', u'vlan_id': 1931, u'address': u'10.31.128.1/24'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1932', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 32, u'description': u'metal', u'address': u'10.32.128.5'}, u'vrf': u'METAL', u'vlan_id': 1932, u'address': u'10.32.128.1/24'})\r\nfailed: [fbs0-b505-9] (item={u'interface': u'Vlan1941', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 41, u'description': u'tunnels', u'address': u'10.41.128.5'}, u'vrf': u'TUNNEL', u'vlan_id': 1941, u'address': u'10.41.128.1/24'}) => {\"changed\": false, \"failed\": true, \"item\": {\"address\": \"10.41.128.1/24\", \"interface\": \"Vlan1941\", \"vlan_id\": 1941, \"vrf\": \"TUNNEL\", \"vrrpv3\": {\"address\": \"10.41.128.5\", \"address_family\": \"ipv4\", \"description\": \"tunnels\", \"group_id\": 41, \"priority\": \"102\"}}, \"msg\": \"interface Vlan1941\\r\\r\\n ^\\r\\n% Invalid command at '^' marker.\\r\\n\\rfbs0-b505-9# \"}\r\n```", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/32114", "commit_html_url": null, "file_loc": "{'base_commit': 'd97080174e9bbebd27a967368934ef91d1f28f64', 'files': [{'path': 'lib/ansible/module_utils/nxos.py', 'status': 'modified', 'Loc': {\"('Cli', 'run_commands', 139)\": {'add': [171]}, '(None, None, None)': {'mod': [37]}}}, {'path': 'lib/ansible/modules/network/nxos/nxos_interface.py', 'status': 'modified', 'Loc': {\"(None, 'get_interface', 238)\": {'mod': [278, 280, 281, 282, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 315, 316, 317, 318, 319, 320, 321, 322, 324, 325, 326, 327, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 340, 341, 342, 343]}, \"(None, 'get_interfaces_dict', 361)\": {'mod': [372]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_interface.py", "lib/ansible/module_utils/nxos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "cc7a5228b02344658dac69c38ccb7d6580d2b4c6", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/34012", "iss_label": "module\naffects_2.4\nnet_tools\nsupport:community\nbug", "title": "nmcli module fails with self.dns4=' '.join(module.params['dns4']) TypeError", "body": "\r\n##### ISSUE TYPE\r\n\r\n - Bug Report\r\n\r\n\r\n##### COMPONENT NAME\r\n\r\n`nmcli`\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.4.1.0\r\n config file = /Users/dlbewley/src/ansible/playbook-openshift/ansible.cfg\r\n configured module search path = 
[u'/Users/dlbewley/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/Cellar/ansible/2.4.1.0/libexec/lib/python2.7/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.14 (default, Sep 25 2017, 09:53:22) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n- Manager: OS X\r\n- Managed: Red Hat Enterprise Linux Server release 7.4 (Maipo)\r\n\r\n##### SUMMARY\r\n\r\nPlaybook fails when trying to join `None` value for `dns4` param [here](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/net_tools/nmcli.py#L559)\r\n\r\nI do not see a requirement to include dns servers, and expect to use DHCP.\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nHost with links on eno1, eno2. Int eno1 is def gw\r\n\r\n\r\n```yaml\r\n---\r\n- hosts: bonded\r\n\r\n# Dec 18 18:08:43 ose-prod-node-07 ansible-nmcli[31031]: Invoked with conn_name=cluster ingress=None slavepriority=32 vlandev=None forwarddelay=15 egress=None ageingtime=300 mtu=None hellotime=2 maxage=20 vlanid=None priority=128 gw4=None state=present gw6=None master=None stp=True ifname=None type=bond miimon=None arp_ip_target=None downdelay=None mac=None ip6=None ip4=None autoconnect=None dns6=None dns4=None arp_interval=None flags=None mode=802.3ad updelay=None\r\n\r\n vars:\r\n nmcli_bond:\r\n - conn_name: cluster\r\n mode: 802.3ad\r\n mtu: 9000\r\n\r\n nmcli_bond_slave:\r\n - conn_name: eno1\r\n master: cluster\r\n - conn_name: eno2\r\n master: cluster\r\n\r\n tasks:\r\n - name: create bond\r\n nmcli:\r\n type: bond\r\n conn_name: '{{ item.conn_name }}'\r\n mode: '{{ item.mode }}'\r\n state: present\r\n with_items:\r\n - '{{ nmcli_bond }}'\r\n\r\n - name: add interfaces to bond\r\n nmcli:\r\n type: bond-slave\r\n conn_name: '{{ item.conn_name }}'\r\n ifname: '{{ item.ifname }}'\r\n master: '{{ item.master }}'\r\n state: present\r\n with_items:\r\n - '{{ nmcli_bond_slave }}'\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\n\r\nFirst test, but expect playbook to run without error.\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\n\r\n```\r\nfailed: [ose-prod-node-07.example.com] (item={u'conn_name': u'cluster', u'mode': u'802.3ad', u'mtu': 9000}) => {\r\n \"changed\": false,\r\n \"failed\": true,\r\n \"item\": {\r\n \"conn_name\": \"cluster\",\r\n \"mode\": \"802.3ad\",\r\n \"mtu\": 9000\r\n },\r\n \"module_stderr\": \"OpenSSH_7.4p1, LibreSSL 2.5.0\\r\\ndebug1: Reading configuration data /Users/dlbewley/.ssh/config\\r\\ndebug1: /Users/dlbewley/.ssh/config line 3: Applying options for *\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug1: /etc/ssh/ssh_config line 51: Applying options for *\\r\\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\\r\\ndebug1: auto-mux: Trying existing master\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug2: mux_client_hello_exchange: master version 4\\r\\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\\r\\ndebug3: mux_client_request_session: entering\\r\\ndebug3: mux_client_request_alive: entering\\r\\ndebug3: mux_client_request_alive: done pid = 10219\\r\\ndebug3: mux_client_request_session: session request sent\\r\\ndebug1: mux_client_request_session: master session id: 2\\r\\ndebug3: mux_client_read_packet: read header failed: Broken pipe\\r\\ndebug2: Received exit status from master 1\\r\\nShared connection to ose-prod-node-07.example.com closed.\\r\\n\",\r\n \"module_stdout\": 
\"/tmp/ansible_gKn2an/ansible_module_nmcli.py:493: PyGIWarning: NetworkManager was imported without specifying a version first. Use gi.require_version('NetworkManager', '1.0') before import to ensure that the right version gets loaded.\\r\\n from gi.repository import NetworkManager, NMClient\\r\\n/tmp/ansible_gKn2an/ansible_module_nmcli.py:493: PyGIWarning: NMClient was imported without specifying a version first. Use gi.require_version('NMClient', '1.0') before import to ensure that the right version gets loaded.\\r\\n from gi.repository import NetworkManager, NMClient\\r\\nTraceback (most recent call last):\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 1190, in \\r\\n main()\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 1134, in main\\r\\n nmcli=Nmcli(module)\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 559, in __init__\\r\\n self.dns4=' '.join(module.params['dns4'])\\r\\nTypeError\\r\\n\",\r\n \"msg\": \"MODULE FAILURE\",\r\n \"rc\": 1\r\n}\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/30757", "commit_html_url": null, "file_loc": "{'base_commit': 'cc7a5228b02344658dac69c38ccb7d6580d2b4c6', 'files': [{'path': 'lib/ansible/modules/net_tools/nmcli.py', 'status': 'modified', 'Loc': {\"('Nmcli', '__init__', 549)\": {'mod': [559]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/net_tools/nmcli.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "5f7d39fede4de8af98472bd009c63c3a86568e2d", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2840", "iss_label": "bug", "title": "wandb: Network error (ReadTimeout), entering retry loop. See wandb\\debug-internal.log for full traceback.", "body": "\r\n - **Current repo**: yolov5-5.0 release version\r\n - **Common dataset**: VisDrone.yaml\r\n - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments\r\n\r\n\r\n## \ud83d\udc1b Bug\r\nI try to use your rep to train yolov4's NET because yolov4(https://github.com/WongKinYiu/PyTorch_YOLOv4)'s code is outdate and do not maintain, it has many bugs.\r\n when I train my own yolov4-tiny.yaml, it comes this bug, I think this bug is because my network can not connect to wandb's server? 
before today, I can train normally, and a few minute ago, I try many times to `python train.py `,but I still can not begin my train code.\r\n\r\n## To Reproduce (REQUIRED)\r\n \r\n`python train.py `\r\n\r\nOutput:\r\n```\r\nYOLOv5 2021-4-15 torch 1.7.1 CUDA:0 (GRID V100D-32Q, 32638.0MB)\r\n\r\nNamespace(adam=False, artifact_alias='latest', batch_size=64, bbox_interval=-1, bucket='', cache_images=False, cfg='models/yolov4-tiny.yaml', data='datai/Visdrone.yaml', device='', entity=None, epochs=300, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], label_smoothing=0.0, linear_lr=False, local_rank=-1, multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs\\\\train\\\\exp8', save_period=-1, single_cls=False, sync_bn=False, total_batch_size=64, upload_dataset=False, weights='', workers=8, world_size=1)\r\ntensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/\r\nhyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0\r\nwandb: Currently logged in as: zigar (use `wandb login --relogin` to force relogin)\r\nwandb: Network error (ReadTimeout), entering retry loop. See wandb\\debug-internal.log for full traceback.\r\n```\r\n\r\n\r\n## Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n - OS: [e.g. WIndows 10]\r\n - GPU [e.g. 
GRID V100D-32Q, 32638.0MB]\r\n\r\n\r\n## Additional context\r\nAdd any other context about the problem here.\r\n", "code": null, "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2882", "commit_html_url": null, "file_loc": "{'base_commit': '5f7d39fede4de8af98472bd009c63c3a86568e2d', 'files': [{'path': 'data/argoverse_hd.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/coco.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/coco128.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/scripts/get_argoverse_hd.sh', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'data/scripts/get_coco.sh', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'data/scripts/get_voc.sh', 'status': 'modified', 'Loc': {'(None, None, 41)': {'add': [41]}, '(None, None, 77)': {'add': [77]}, '(None, None, 120)': {'add': [120]}, '(None, None, 5)': {'mod': [5]}, '(None, None, 32)': {'mod': [32, 33]}, '(None, None, 35)': {'mod': [35, 36]}, '(None, None, 38)': {'mod': [38]}, '(None, None, 40)': {'mod': [40]}, '(None, None, 43)': {'mod': [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]}, '(None, None, 57)': {'mod': [57, 58, 59]}, '(None, None, 68)': {'mod': [68]}, '(None, None, 72)': {'mod': [72, 73]}, '(None, None, 76)': {'mod': [76]}, '(None, None, 79)': {'mod': [79, 80, 81, 82]}, '(None, None, 84)': {'mod': [84]}, '(None, None, 93)': {'mod': [93]}, '(None, None, 95)': {'mod': [95]}, '(None, None, 97)': {'mod': [97, 98, 99, 100, 102, 103, 104]}, '(None, None, 106)': {'mod': [106]}, '(None, None, 108)': {'mod': [108, 109, 111, 112, 113, 114, 116, 117, 118, 119]}, '(None, None, 123)': {'mod': [123, 124, 126, 127, 128, 129, 131, 132, 133, 134]}}}, {'path': 'data/voc.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'utils/general.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 175]}, \"(None, 'check_dataset', 156)\": {'add': [166], 'mod': [164, 168, 169, 171]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/general.py"], "doc": [], "test": [], "config": ["data/argoverse_hd.yaml", "data/voc.yaml", "data/coco.yaml", "data/coco128.yaml"], "asset": ["data/scripts/get_argoverse_hd.sh", "data/scripts/get_voc.sh", "data/scripts/get_coco.sh"]}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2824", "iss_label": "bug", "title": " Sizes of tensors must match ", "body": "Multi Threaded Inference is not working with Yolo5. It throws the following error,\r\n\r\n```\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 113, in forward\r\n yi = self.forward_once(xi)[0] # forward\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 139, in forward_once\r\n x = m(x) # run\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 54, in forward\r\n y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy\r\nRuntimeError: The size of tensor a (68) must match the size of tensor b (56) at non-singleton dimension 3\r\nException in thread Thread-112:\r\nTraceback (most recent call last):\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/threading.py\", line 932, in _bootstrap_inner\r\n self.run()\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n```\r\n\r\nI saw the similar bug in other issue and I used the latest version of this repo. Still the problem persists. How can I fix it?\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ultralytics/yolov5/commit/cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "file_loc": "{'base_commit': 'cbd55da5d24becbe3b94afaaa4cdd1187a512c3f', 'files': [{'path': 'models/yolo.py', 'status': 'modified', 'Loc': {\"('Detect', 'forward', 38)\": {'mod': [52]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["models/yolo.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "d9b64c27c24db2001535bb480959aca015159510", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/119", "iss_label": "question\nStale", "title": "The yolov5m model grew from 42M to 84M; what change was made?", "body": "When I trained on 6.16 (yolov5m), the resulting model size was 42M\r\n\r\nBut today (6.18), training with the latest code, the model size is 84M\r\n\r\nMay I ask what change was made?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ultralytics/yolov5/commit/d9b64c27c24db2001535bb480959aca015159510", "file_loc": "{'base_commit': 'd9b64c27c24db2001535bb480959aca015159510', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {\"(None, 'train', 60)\": {'mod': [335]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "bfd51f62f8e0a114cb94c269e83ff135e31d8bdb", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/187", "iss_label": "bug", "title": "can't test with my finetune weights", "body": "i train a model in my custom data, can get the weights (**last.pt** and **best.pt**)\r\ni run:\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/last.pt --device 4`\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/best.pt --device 4`\r\nboth raise the error:\r\n**Traceback (most recent call last):\r\n File \"test.py\", line 277, in \r\n opt.verbose)\r\n File \"test.py\", line 86, in test\r\n names = model.names if hasattr(model, 'names') else model.module.names\r\n File \"/home/anaconda3/envs/yolov5/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 594, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'Model' object has no 
attribute 'module'**\r\n\r\nHowever, i can run with the default weight **yolov5s.pt**\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --device 4`\r\n\r\npytorch = 1.5", "code": null, "pr_html_url": "https://github.com/ultralytics/yolov5/pull/245", "commit_html_url": null, "file_loc": "{'base_commit': 'bfd51f62f8e0a114cb94c269e83ff135e31d8bdb', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {\"(None, 'train', 62)\": {'add': [135, 136, 174], 'mod': [82, 291]}, '(None, None, None)': {'mod': [375]}}}, {'path': 'utils/torch_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [56]}, \"(None, 'model_info', 101)\": {'mod': [114, 115]}, \"('ModelEMA', 'update', 184)\": {'mod': [188]}, \"('ModelEMA', 'update_attr', 198)\": {'mod': [199, 200, 201, 202]}}}, {'path': 'utils/utils.py', 'status': 'modified', 'Loc': {\"(None, 'check_img_size', 48)\": {'mod': [50]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py", "utils/utils.py", "utils/torch_utils.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/227", "iss_label": "", "title": "Short text samples", "body": "It would be awesome to be able to use this to help train a hot word detector. In addition to recording myself saying the hotword, I could create an even larger dataset by adding outputs of this model that used my voice as the reference.\r\n\r\nThe problem with that, however, is that this model seems to only work well on sentences of medium length (+- 20 words according to demo_cli.py). Is there anything I can do to make short text samples (e.g. 
2 words) sound better?", "code": null, "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472", "commit_html_url": null, "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 18)': {'mod': [18]}, '(None, None, 23)': {'mod': [23, 24]}, '(None, None, 65)': {'mod': [65, 66, 68, 70]}}}, {'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13, 43, 162], 'mod': [24, 25, 26, 30, 31, 32, 70, 76]}}}, {'path': 'demo_toolbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 32], 'mod': [23, 24, 25]}}}, {'path': 'encoder/audio.py', 'status': 'modified', 'Loc': {\"(None, 'preprocess_wav', 19)\": {'mod': [20, 43, 44]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 16)': {'add': [16]}, '(None, None, 1)': {'mod': [1]}}}, {'path': 'requirements_gpu.txt', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/LICENSE.txt', 'status': 'modified', 'Loc': {'(None, None, 3)': {'add': [3]}, '(None, None, 4)': {'add': [4]}}}, {'path': 'synthesizer/audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'synthesizer/feeder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/hparams.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [348], 'mod': [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 149, 150, 151, 152, 153, 154, 155, 157, 158, 159, 160, 161, 162, 164, 165, 166, 167, 168, 169, 170, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 190, 191, 192, 193, 194, 196, 197, 198, 199, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 231, 232, 233, 234, 235, 237, 238, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 342, 343, 344, 345, 347]}, \"(None, 'hparams_debug_string', 350)\": {'mod': [351, 352, 353]}}}, {'path': 'synthesizer/inference.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [1, 2, 3, 4, 5, 9, 11]}, \"('Synthesizer', '__init__', 19)\": {'add': [33], 'mod': [21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59]}, \"('Synthesizer', 'griffin_lim', 149)\": {'add': [154]}, \"('Synthesizer', None, 15)\": {'mod': [19, 106, 107, 108, 109, 110, 111, 113, 114, 116, 117, 118, 119, 121]}, \"('Synthesizer', 'is_loaded', 61)\": {'mod': [63]}, \"('Synthesizer', 'load', 67)\": {'mod': [69, 70, 71, 72, 73, 74, 75]}, \"('Synthesizer', 
'synthesize_spectrograms', 77)\": {'mod': [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 104]}}}, {'path': 'synthesizer/infolog.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/__init__.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/architecture_wrappers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/attention.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/custom_decoder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/helpers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/modules.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/tacotron.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [1, 2, 3, 4, 5, 6, 7, 8, 9]}, \"(None, 'split_func', 14)\": {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 79, 81, 82, 84, 86, 87, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 145, 147, 148, 151, 153, 154, 155, 156, 157, 158, 160, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 198, 199, 200, 201, 202, 203, 205, 206, 207, 209, 210, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 225, 226, 228, 229, 230, 232, 233, 234, 235, 237, 238, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 252, 253, 254, 256, 257, 259, 260, 261, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 312, 313, 314, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 327, 328, 330, 331, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 405, 406, 407, 409, 410, 412, 413, 414, 415, 416, 417, 418, 420, 421, 422, 423, 424, 425, 427, 428, 429, 430, 431, 432, 433, 435, 436, 437, 439, 441, 442, 443, 444, 445, 446, 447, 448, 449, 451, 452, 454, 455, 456, 457, 458, 459, 460, 461, 462, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 501, 502, 504, 505, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 521]}}}, {'path': 'synthesizer/preprocess.py', 'status': 'modified', 'Loc': {\"(None, 'process_utterance', 185)\": {'add': [204]}}}, {'path': 'synthesizer/synthesize.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82], 'mod': [1, 3, 4, 6, 7]}, \"(None, 'run_eval', 10)\": {'mod': [10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37]}, \"(None, 'run_synthesis', 39)\": {'mod': [40, 41, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 78, 80, 81]}}}, {'path': 'synthesizer/tacotron2.py', 'status': 'removed', 'Loc': {}}, 
{'path': 'synthesizer/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 79, 83], 'mod': [3, 4, 5, 6, 7, 9, 10, 12, 14, 16, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]}, \"(None, 'model_train_mode', 85)\": {'mod': [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 136, 138, 139, 141, 142, 143, 144, 146, 147, 148, 149]}, \"(None, 'train', 110)\": {'mod': [151, 152, 153, 154, 155, 156, 157, 159, 161, 167, 169, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 184, 185, 186, 187, 189, 190, 191, 192, 194, 195, 196, 198, 199, 201, 202, 204, 205, 207, 208, 210, 212, 213, 214, 215, 216, 218, 219, 220, 222, 223, 224, 226, 227, 228, 230, 231, 232, 233, 234, 235, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 374, 375, 376, 377, 378, 379, 381, 382, 383, 385, 386, 387, 388, 391, 392]}}}, {'path': 'synthesizer/utils/__init__.py', 'status': 'modified', 'Loc': {\"('ValueWindow', None, 1)\": {'add': [0]}}}, {'path': 'synthesizer_train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 53, 55]}}}, {'path': 'toolbox/__init__.py', 'status': 'modified', 'Loc': {\"('Toolbox', 'init_encoder', 325)\": {'add': [333]}, \"('Toolbox', None, 42)\": {'mod': [43]}, \"('Toolbox', '__init__', 43)\": {'mod': [54]}, \"('Toolbox', 'synthesize', 207)\": {'mod': [211, 212, 213, 214, 215, 216, 217, 221, 224, 228]}, \"('Toolbox', 'vocode', 237)\": {'mod': [243]}}}, {'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [41]}, \"('UI', None, 53)\": {'mod': [331]}, \"('UI', 'populate_models', 338)\": {'mod': [347, 348, 349, 350, 351, 352, 353]}}}, {'path': 'vocoder_preprocess.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32, 40], 'mod': [20]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/models/modules.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "synthesizer_train.py", "demo_cli.py", "toolbox/__init__.py", "demo_toolbox.py", "synthesizer/models/architecture_wrappers.py", "synthesizer/audio.py", "synthesizer/preprocess.py", "synthesizer/tacotron2.py", "synthesizer/hparams.py", "synthesizer/utils/__init__.py", "synthesizer/synthesize.py", "toolbox/ui.py", "encoder/audio.py", "synthesizer/feeder.py", "synthesizer/models/helpers.py", "synthesizer/models/__init__.py", "synthesizer/inference.py", "vocoder_preprocess.py", 
"synthesizer/models/custom_decoder.py", "synthesizer/infolog.py"], "doc": ["synthesizer/LICENSE.txt", "README.md"], "test": [], "config": ["requirements_gpu.txt", "requirements.txt"], "asset": []}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "f108782e30369dedfc66f22d21c2b72c77941de7", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5050", "iss_label": "bug", "title": "[Bug]: img2img sampler is not changing", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nI'm trying to choose another sampler, but it is not working.\r\n\r\nI tried checking the p value, and found sampler_name = None\r\nThere seems to be a code missing to assign the variable sampler_name in the img2img\r\n\r\ntxt2img seems working fine, though.\n\n### Steps to reproduce the problem\n\nChange the sampler and see the results. They are all the same.\n\n### What should have happened?\n\nDifferent samplers should produce different results.\n\n### Commit where the problem happens\n\n828438b\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\n_No response_", "code": null, "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4910", "commit_html_url": null, "file_loc": "{'base_commit': 'f108782e30369dedfc66f22d21c2b72c77941de7', 'files': [{'path': 'scripts/xy_grid.py', 'status': 'modified', 'Loc': {\"(None, 'confirm_samplers', 71)\": {'add': [74]}, \"('Script', 'process_axis', 276)\": {'add': [279]}}}, {'path': 'img2img.py', 'Loc': {}}, {'path': 'Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["img2img.py", "scripts/xy_grid.py"], "doc": [], "test": [], "config": [], "asset": ["Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name"]}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "d9499f4301018ebd2977685d098381aa4111d2ae", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13724", "iss_label": "enhancement", "title": "[Feature Request]: Sort items by date by default", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What would your feature do ?\n\nI hope to use time sorting by default when opening additional interfaces, so that I can immediately try the new model I just downloaded.\n\n### Proposed workflow\n\n1. Go to .... \r\n2. Press ....\r\n3. 
...\r\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/d9499f4301018ebd2977685d098381aa4111d2ae", "file_loc": "{'base_commit': 'd9499f4301018ebd2977685d098381aa4111d2ae', 'files': [{'path': 'javascript/extraNetworks.js', 'status': 'modified', 'Loc': {\"(None, 'setupExtraNetworksForTab', 18)\": {'add': [51, 54, 98, 99], 'mod': [30, 56, 57, 58, 59, 65, 91, 92, 93, 94]}, '(None, None, None)': {'add': [115]}, \"(None, 'applyExtraNetworkSort', 116)\": {'add': [116]}}}, {'path': 'modules/shared_options.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [236]}}}, {'path': 'modules/ui_extra_networks.py', 'status': 'modified', 'Loc': {\"(None, 'create_ui', 357)\": {'add': [397], 'mod': [384, 385]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/shared_options.py", "modules/ui_extra_networks.py", "javascript/extraNetworks.js"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "22bcc7be428c94e9408f589966c2040187245d81", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9102", "iss_label": "bug-report", "title": "[Bug]: Model Dropdown Select on Firefox is obscured by svelte pre-loader", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nSeems the latest pull request has added pre-loaders and I noticed that the model dropdown is constantly loading and therefore obscuring the dropdown from the user. 
This is only happening in Firefox, Chrome for example is fine.\r\n\r\n```\r\n.wrap.default.svelte-gjihhp {\r\ninset: 0;\r\n}\r\n```\r\n\r\nI just set it to `display: none` to access it\n\n### Steps to reproduce the problem\n\nLoad up in Firefox and try to change the model\r\n\r\nSee attached screenshot\r\n![Screenshot_1](https://user-images.githubusercontent.com/3169931/228311409-22be3832-0348-424c-9298-08e76cb166a7.jpg)\r\n\r\n\n\n### What should have happened?\n\nHave access to the model dropdown select\n\n### Commit where the problem happens\n\nf1db987\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nUncaught (in promise) TypeError: q[R[H]] is undefined\r\n ct http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\r\n ct http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\r\n ze http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\n```\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/joysun545/stable-diffusion-webui/commit/22bcc7be428c94e9408f589966c2040187245d81", "file_loc": "{'base_commit': '22bcc7be428c94e9408f589966c2040187245d81', 'files': [{'path': 'modules/ui.py', 'status': 'modified', 'Loc': {\"(None, 'create_ui', 437)\": {'add': [1630]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "a8e3336a850e856188350a93e67d77c07c85b8af", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/2008", "iss_label": "wontfix", "title": "Expand run_lm_finetuning.py to all models", "body": "## \ud83d\ude80 Feature\r\n\r\n[run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_lm_finetuning.py) is a very useful tool for finetuning many models the library provided. But it doesn't cover all the models. Currently available models are:\r\n\r\n- gpt2\r\n- openai-gpt\r\n- bert\r\n- roberta\r\n- distilbert\r\n- camembert\r\n\r\nAnd not available ones:\r\n\r\n- ctrl\r\n- xlm\r\n- xlnet\r\n- transfo-xl\r\n- albert\r\n\r\n## Motivation\r\n\r\nMost important part of such a library is that it can be easily finetuned. 
`run_lm_finetuning.py` gives us that opportunity but why say no more :)\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/huggingface/transformers/commit/3dcb748e31be8c7c9e4f62926c5c144c62d07218\n\nhttps://github.com/huggingface/transformers/commit/a8e3336a850e856188350a93e67d77c07c85b8af", "file_loc": "{'base_commit': 'a8e3336a850e856188350a93e67d77c07c85b8af', 'files': [{'path': 'examples/ner/run_ner.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33], 'mod': [41]}}}, {'path': 'examples/ner/run_tf_ner.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [16, 17, 18, 19, 21, 22, 23, 24, 25, 37, 38, 39, 41, 42, 43, 44, 45, 52]}, \"(None, 'main', 457)\": {'mod': [512, 513, 523, 530, 565, 587, 614, 615]}}}, {'path': 'examples/run_glue.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32], 'mod': [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 676, 677, 683, 695]}, \"(None, 'train', 69)\": {'mod': [75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]}, \"(None, 'main', 386)\": {'mod': [445, 625, 626, 632, 637]}}}, {'path': 'examples/run_language_modeling.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [40], 'mod': [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 60, 61, 62, 789]}, \"('TextDataset', '__init__', 68)\": {'mod': [76, 77, 78, 79, 80, 81, 82, 83]}, \"(None, 'main', 464)\": {'mod': [696, 699, 701, 703, 706, 708, 712, 722, 730, 771, 772]}}}, {'path': 'examples/run_squad.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32], 'mod': [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 75, 76, 77, 78, 79, 80, 81, 845]}, \"(None, 'train', 76)\": {'mod': [83, 84, 85, 86, 87, 88, 89, 90, 91]}, \"(None, 'main', 477)\": {'mod': [516, 760, 761, 765, 770, 820, 821]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [160, 319]}}}, {'path': 'templates/adding_a_new_example_script/run_xxx.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30], 'mod': [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 75, 76, 77, 78, 79, 80, 709]}, \"(None, 'set_seed', 69)\": {'mod': [71, 72, 73]}, \"(None, 'main', 388)\": {'mod': [421, 629, 630, 634, 639, 690, 691]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["examples/run_squad.py", "templates/adding_a_new_example_script/run_xxx.py", "examples/run_glue.py", "src/transformers/__init__.py", "examples/ner/run_tf_ner.py", "examples/run_language_modeling.py", "examples/ner/run_ner.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/5212", "iss_label": "", "title": "BartConfig wrong decoder_start_token_id?", "body": "# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...): Bart\r\n\r\nLanguage I am using the model on (English, Chinese ...): English\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\nfrom transformers import BartConfig, BartTokenizer\r\nconfig = BartConfig.from_pretrained('facebook/bart-large')\r\ntokenizer = 
BartTokenizer.from_pretrained('facebook/bart-large')\r\nconfig.decoder_start_token_id\r\n>>> 2\r\ntokenizer.bos_token_id\r\n>>> 0 # != config.decoder_start_token_id\r\ntokenizer.eos_token_id\r\n>>> 2\r\n```\r\n\r\nIt is misleading in the documentation of the function ```generate````\r\n\r\n*decoder_start_token_id=None \u2013 (optional) int If an encoder-decoder model starts decoding with a different token than BOS. Defaults to None and is changed to BOS later.*\r\n\r\n\r\n## Expected behavior\r\n\r\nI expect that decoder_start_token_id = tokenizer.bos_token_id, but maybe the model is designed to start decoding with EOS token.\r\n\r\n", "code": null, "pr_html_url": "https://github.com/huggingface/transformers/pull/5306", "commit_html_url": null, "file_loc": "{'base_commit': '88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b', 'files': [{'path': 'src/transformers/modeling_tf_utils.py', 'status': 'modified', 'Loc': {\"('TFPreTrainedModel', 'generate', 551)\": {'mod': [645, 646]}}}, {'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {\"('PreTrainedModel', 'generate', 871)\": {'mod': [965, 966]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\n\u91cc\u7684docstring"}, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/modeling_tf_utils.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "ba9e3fe6267ea81a3d546b7aa5bcf0122f365e51", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1153", "iss_label": "bug", "title": "mermaid: Generating ..seq_flow/20240402032019.svg.. Error: Failed to launch the browser process!", "body": "**Bug description**\r\nGenerating /app/metagpt/workspace/jqdlhwap/resources/seq_flow/20240402032019.svg.. Error: Failed to launch the browser process!\r\n\r\n**Bug solved method**\r\n\r\n\r\n**Environment information**\r\n\"docker compose up -d\" after clone\r\nalready run \"npm install -g @mermaid-js/mermaid-cli\":\r\n\r\nroot@84a2e77496b0:/app/metagpt# mmdc -h\r\nUsage: mmdc [options]\r\n\r\nOptions:\r\n -V, --version output the version number\r\n -t, --theme [theme] Theme of the chart (choices: \"default\", \"forest\", \"dark\", \"neutral\", default: \"default\")\r\n -w, --width [width] Width of the page (default: 800)\r\n -H, --height [height] Height of the page (default: 600)\r\n -i, --input Input mermaid file. Files ending in .md will be treated as Markdown and all charts (e.g. ```mermaid (...)```) will be extracted and generated.\r\n Use `-` to read from stdin.\r\n -o, --output [output] Output file. It should be either md, svg, png or pdf. Optional. Default: input + \".svg\"\r\n -e, --outputFormat [format] Output format for the generated image. (choices: \"svg\", \"png\", \"pdf\", default: Loaded from the output file extension)\r\n -b, --backgroundColor [backgroundColor] Background color for pngs/svgs (not pdfs). Example: transparent, red, '#F0F0F0'. 
(default: \"white\")\r\n -c, --configFile [configFile] JSON configuration file for mermaid.\r\n -C, --cssFile [cssFile] CSS file for the page.\r\n -s, --scale [scale] Puppeteer scale factor (default: 1)\r\n -f, --pdfFit Scale PDF to fit chart\r\n -q, --quiet Suppress log output\r\n -p --puppeteerConfigFile [puppeteerConfigFile] JSON configuration file for puppeteer.\r\n -h, --help display help for command\r\n\r\n- LLM type and model name: zhipu-api / GLM-4\r\n- System version:\r\n- Python version:\r\n- MetaGPT version or branch: main\r\n\r\n`run in docker`\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n2024-04-02 03:20:46.126 | INFO | metagpt.utils.mermaid:mermaid_to_file:48 - Generating /app/metagpt/workspace/jqdlhwap/resources/seq_flow/20240402032019.svg..\r\n2024-04-02 03:20:46.460 | WARNING | metagpt.utils.mermaid:mermaid_to_file:74 - \r\nError: Failed to launch the browser process!\r\n[0402/032046.449080:ERROR:zygote_host_impl_linux.cc(100)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at Interface.onClose (file:///usr/lib/node_modules/@mermaid-js/mermaid-cli/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at Interface.emit (node:events:524:35)\r\n at Interface.close (node:internal/readline/interface:526:10)\r\n at Socket.onend (node:internal/readline/interface:252:10)\r\n at Socket.emit (node:events:524:35)\r\n at endReadableNT (node:internal/streams/readable:1378:12)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\r\n", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1155", "commit_html_url": null, "file_loc": "{'base_commit': 'ba9e3fe6267ea81a3d546b7aa5bcf0122f365e51', 'files': [{'path': 'metagpt/configs/mermaid_config.py', 'status': 'modified', 'Loc': {\"('MermaidConfig', None, 13)\": {'mod': [16]}}}, {'path': 'metagpt/utils/mermaid.py', 'status': 'modified', 'Loc': {\"(None, 'mermaid_to_file', 17)\": {'add': [83]}}}, {'path': 'config/config2.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/configs/mermaid_config.py", "metagpt/utils/mermaid.py"], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "c779f6977ecbdba075d7c81519edd5eaf6de2d0e", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1197", "iss_label": "", "title": "Support for Cohere API ", "body": "Please add support for Cohere API with all the built in RAG and tool use functionalities. Essentially, RAG and tool use in Cohere are just chat parameters definable by users. 
More information can be found at https://docs.cohere.com/reference/chat .", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1193", "commit_html_url": null, "file_loc": "{'base_commit': 'c779f6977ecbdba075d7c81519edd5eaf6de2d0e', 'files': [{'path': 'metagpt/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [53]}}}, {'path': 'metagpt/rag/factories/ranker.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, \"('RankerFactory', '__init__', 20)\": {'add': [24]}, \"('RankerFactory', None, 17)\": {'add': [47]}}}, {'path': 'metagpt/rag/schema.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [121]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [42]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/rag/schema.py", "metagpt/rag/factories/ranker.py", "setup.py", "metagpt/const.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "b5bb4d7e63e72c3d118e449a3763c1ff4411f159", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1547", "iss_label": "", "title": "ERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html", "body": "**Bug description**\r\nI installed playwright and its chrominum with the guidance and made configuration of mermaid. But it seems that the mermaid didn't work normally. \r\n```\r\nERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\r\nCall log:\r\nnavigating to \"file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\", waiting until \"load\"\r\n```\r\n\r\n**Bug solved method**\r\nI think it may do with the configuration but the document of this part is not so clear. I don't know how to fill in the \"path\" if I'm using playwright and whether it is right to keep that stuff in default.\r\n\r\nThis is my configuration:\r\n```yaml\r\nllm:\r\n api_type: 'openai' # or azure / ollama / groq etc. Check LLMType for more options\r\n api_key: '[MY_API_KEY]' # MY_API_KEY\r\n model: 'yi-lightning' # or gpt-3.5-turbo\r\n base_url: 'https://api.lingyiwanwu.com/v1' # or any forward url.\r\n # proxy: 'YOUR_LLM_PROXY_IF_NEEDED' # Optional. If you want to use a proxy, set it here.\r\n # pricing_plan: 'YOUR_PRICING_PLAN' # Optional. 
If your pricing plan uses a different name than the `model`.\r\n\r\nmermaid:\r\n engine: 'playwright' # nodejs/ink/playwright/pyppeteer\r\n # path: 'mmdc' # such as './node_modules/.bin/mmdc'\r\n # puppeteer_config: './config/puppeteer-config' # only for nodejs\r\n # pyppeteer_path: '/usr/bin/google-chrome-stable' # only for pyppeteer\r\n```\r\n\r\nThis is your example:\r\n```yaml\r\nmermaid:\r\n engine: 'nodejs' # nodejs/ink/playwright/pyppeteer\r\n path: 'mmdc' # such as './node_modules/.bin/mmdc'\r\n puppeteer_config: './config/puppeteer-config' # only for nodejs\r\n pyppeteer_path: '/usr/bin/google-chrome-stable' # only for pyppeteer\r\n```\r\n\r\n**Environment information**\r\n- LLM type and model name:\r\n- System version: ubuntu 22.04\r\n- Python version: 3.11\r\n- MetaGPT version or branch: 0.8\r\n\r\n\r\n\r\n- packages version:\r\n- installation method: pip\r\n\r\n**Screenshots or logs**\r\n2024-10-29 16:22:16.900 | WARNING | metagpt.utils.cost_manager:update_cost:49 - Model yi-lightning not found in TOKEN_COSTS.\r\n2024-10-29 16:22:16.903 | INFO | metagpt.utils.git_repository:rename_root:219 - Rename directory /root/workspace/cited_papaer_eval/workspace/20241029162137 to /root/workspace/cited_papaer_eval/workspace/scholar_cited_evaluation\r\n2024-10-29 16:22:16.904 | INFO | metagpt.utils.file_repository:save:57 - save to: /root/workspace/cited_papaer_eval/workspace/scholar_cited_evaluation/docs/prd/20241029162216.json\r\n2024-10-29 16:22:17.435 | ERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\r\nCall log:\r\nnavigating to \"file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\", waiting until \"load\"\r\n", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1564", "commit_html_url": null, "file_loc": "{'base_commit': 'b5bb4d7e63e72c3d118e449a3763c1ff4411f159', 'files': [{'path': 'metagpt/utils/mmdc_playwright.py', 'status': 'modified', 'Loc': {\"(None, 'mermaid_to_file', 17)\": {'mod': [84, 85, 86, 87]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/utils/mmdc_playwright.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7276f699fc85c611f1c3f83a19a368da9841e3a4", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2892", "iss_label": "question", "title": "Flow as tool Usage", "body": "### Discussed in https://github.com/langflow-ai/langflow/discussions/2891\r\n\r\n
Originally posted by **pavansandeep2910** July 23, 2024\r\nI cannot understand how to load files to see them in the flow as tool component. Can anyone help direct me to flow as tool usage?
    ", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3093", "commit_html_url": null, "file_loc": "{'base_commit': '7276f699fc85c611f1c3f83a19a368da9841e3a4', 'files': [{'path': 'src/backend/base/langflow/components/prototypes/SubFlow.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [4, 6, 9, 10, 11, 12]}, \"('SubFlowComponent', 'build', 98)\": {'add': [103], 'mod': [102, 105, 108, 112, 114, 115]}, \"('SubFlowComponent', None, 15)\": {'mod': [15, 17, 18, 19, 22, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100]}, \"('SubFlowComponent', 'get_flow_names', 24)\": {'mod': [25, 26]}, \"('SubFlowComponent', 'update_build_config', 35)\": {'mod': [36, 39, 41]}, \"('SubFlowComponent', 'add_inputs_to_build_config', 58)\": {'mod': [71]}}}, {'path': 'src/backend/base/langflow/initial_setup/setup.py', 'status': 'modified', 'Loc': {\"(None, 'load_starter_projects', 342)\": {'mod': [346]}}}, {'path': 'src/backend/base/langflow/inputs/inputs.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [231]}, \"('SecretStrInput', None, 215)\": {'mod': [229]}}}, {'path': 'src/frontend/src/CustomNodes/GenericNode/components/handleRenderComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 1)': {'mod': [1]}, '(None, None, 4)': {'mod': [4]}, '(None, None, 40)': {'mod': [40, 41, 42, 43]}, '(None, None, 103)': {'mod': [103]}}}, {'path': 'src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 352)': {'mod': [352, 353]}, '(None, None, 365)': {'mod': [365, 366]}}}, {'path': 'src/frontend/src/CustomNodes/hooks/use-handle-new-value.tsx', 'status': 'modified', 'Loc': {'(None, None, 66)': {'mod': [66, 67]}}}, {'path': 'src/frontend/src/components/parameterRenderComponent/component/refreshParameterComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 32)': {'mod': [32]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/components/prototypes/SubFlow.py", "src/backend/base/langflow/initial_setup/setup.py", "src/backend/base/langflow/inputs/inputs.py", "src/frontend/src/components/parameterRenderComponent/component/refreshParameterComponent/index.tsx", "src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx", "src/frontend/src/CustomNodes/GenericNode/components/handleRenderComponent/index.tsx", "src/frontend/src/CustomNodes/hooks/use-handle-new-value.tsx"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "b3b5290598f5970fd6a1a092fe4d11211008a04d", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/5378", "iss_label": "bug", "title": "URL component lost imported urls in tool mode when refresh UI", "body": "### Bug Description\n\nI have this issue when i build multi agent flow like import json below\r\n- URL Component Before refresh UI:\r\n![image](https://github.com/user-attachments/assets/f361501d-3d11-4e4a-b560-203faf8a4935)\r\n![image](https://github.com/user-attachments/assets/4ee6c93f-914b-4efa-b0eb-5a13f5c404fa)\r\n\r\nAfter refresh 
UI:\r\n![image](https://github.com/user-attachments/assets/2224a76e-1dd7-42d4-8531-a5e86719653c)\r\n![image](https://github.com/user-attachments/assets/3753754f-3a16-4bfc-a82e-1286c7504fe3)\r\n\r\n\r\nI have checked in normal mode of the URL Component, and this bug does not appear.\n\n### Reproduction\n\n1. Create flow\r\n2. Add URL component, change to Tool mode\r\n3. Input URLs\r\n4. Save flow\r\n5. Reload UI (Press F5)\n\n### Expected behavior\n\nURL component should keep the URLs as input in Tool mode\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nWindows 11/Docker\n\n### Langflow Version\n\nv1.1.1\n\n### Python Version\n\nNone\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n[Simple Agent (bug).json](https://github.com/user-attachments/files/18206463/Simple.Agent.bug.json)\r\n", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/5316", "commit_html_url": null, "file_loc": "{'base_commit': 'b3b5290598f5970fd6a1a092fe4d11211008a04d', 'files': [{'path': 'src/frontend/src/components/core/parameterRenderComponent/components/toggleShadComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 39)': {'mod': [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]}}}, {'path': 'src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 5)': {'add': [5]}, '(None, None, 44)': {'add': [44]}, '(None, None, 79)': {'add': [79]}, '(None, None, 98)': {'add': [98]}, '(None, None, 149)': {'add': [149]}, '(None, None, 11)': {'mod': [11]}, '(None, None, 23)': {'mod': [23, 24, 25]}, '(None, None, 145)': {'mod': [145, 146]}, '(None, None, 158)': {'mod': [158, 159]}, '(None, None, 174)': {'mod': [174]}, '(None, None, 291)': {'mod': [291]}, '(None, None, 403)': {'mod': [403, 404, 405, 407, 408, 409]}, '(None, None, 421)': {'mod': [421, 422, 423, 424, 425, 426]}, '(None, None, 471)': {'mod': [471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx", "src/frontend/src/components/core/parameterRenderComponent/components/toggleShadComponent/index.tsx"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "a4dc5381b2cf31c507cc32f9027f76bf00d61ccc", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3536", "iss_label": "bug", "title": "Prompt component does not pass variables correctly", "body": "### Bug Description\n\nI have a prompt with two variables. 
\r\n{image_url} Value of image is https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\n{post_id} Value of Post ID is 11620\r\n\r\nPrompt is \r\nTask: Upload the image from the provided URL to WordPress and set it as the featured image for the specified post.\r\nImage URL: {image_url}\r\nPost ID: {post_id}\r\n\r\nThis worked on 1.0.15, but after I upgraded to 1.0.16 the second variable is not passed on, it repeats the image one.\r\n\r\nComponent output\r\nTask: Upload the image from the provided URL to WordPress and set it as the featured image for the specified post.\r\nImage URL: https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\nPost ID: https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\n\n\n### Reproduction\n\nCreate prompt with multiple variables.\r\nAdd value in each.\r\nTry to build prompt and you will find that only first variable is being pulled.\r\n![image](https://github.com/user-attachments/assets/cc478377-6c3d-4a71-ba13-ab6e1773413e)\r\n\r\n\r\n![image](https://github.com/user-attachments/assets/fad07891-e9d1-474f-b7a9-efb3084c4caf)\r\n\n\n### Expected behavior\n\nmultiple variables should work\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nRender\n\n### Langflow Version\n\n1.0.16\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3698", "commit_html_url": null, "file_loc": "{'base_commit': 'a4dc5381b2cf31c507cc32f9027f76bf00d61ccc', 'files': [{'path': 'src/backend/base/langflow/custom/custom_component/component.py', 'status': 'modified', 'Loc': {\"('Component', '__init__', 42)\": {'add': [71], 'mod': [45, 56]}, \"('Component', '_reset_all_output_values', 88)\": {'mod': [89, 90]}, \"('Component', '_build_state_model', 92)\": {'mod': [98]}, \"('Component', '__deepcopy__', 112)\": {'mod': [119]}, \"('Component', 'list_outputs', 166)\": {'mod': [170]}, \"('Component', 'get_output', 210)\": {'mod': [223, 224]}, \"('Component', 'set_output_value', 234)\": {'mod': [235, 236]}, \"('Component', 'map_outputs', 240)\": {'mod': [253, 257]}, \"('Component', 'map_inputs', 259)\": {'mod': [270]}, \"('Component', '_set_output_types', 290)\": {'mod': [291]}, \"('Component', 'get_output_by_method', 296)\": 
{'mod': [299]}, \"('Component', '_find_matching_output_method', 327)\": {'mod': [329]}, \"('Component', '__getattr__', 440)\": {'mod': [445, 446]}, \"('Component', '_set_outputs', 577)\": {'mod': [581]}, \"('Component', '_build_results', 619)\": {'mod': [623]}}}, {'path': 'src/backend/base/langflow/graph/graph/base.py', 'status': 'modified', 'Loc': {\"('Graph', '__apply_config', 318)\": {'mod': [322]}}}, {'path': 'src/backend/base/langflow/template/field/base.py', 'status': 'modified', 'Loc': {\"('Output', None, 161)\": {'add': [180]}}}, {'path': 'src/backend/tests/unit/test_custom_component.py', 'status': 'modified', 'Loc': {\"(None, 'test_custom_component_get_function_entrypoint_args_no_args', 397)\": {'add': [402]}}}, {'path': 'src/backend/tests/unit/test_database.py', 'status': 'modified', 'Loc': {\"(None, 'test_read_flow', 76)\": {'mod': [76, 79]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/template/field/base.py", "src/backend/base/langflow/graph/graph/base.py", "src/backend/base/langflow/custom/custom_component/component.py"], "doc": [], "test": ["src/backend/tests/unit/test_custom_component.py", "src/backend/tests/unit/test_database.py"], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "20ceb42504087c712aaee41bfc17a870ae0109d4", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2039", "iss_label": "enhancement", "title": "[Feat] Firecrawl\ud83d\udd25 Integration ", "body": "Hi all,\r\n\r\nOpening this issue after chatting with Rodrigo. It would be awesome to add a [Firecrawl](https://firecrawl.dev) web loader / tool for people to use it to scrape, crawl and extract LLM ready data from the web.\r\n\r\nWould love to hear your thoughts on how we can best integrate it.\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/2359", "commit_html_url": null, "file_loc": "{'base_commit': '20ceb42504087c712aaee41bfc17a870ae0109d4', 'files': [{'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, 2114)': {'add': [2114]}, '(None, None, 2436)': {'add': [2436]}, '(None, None, 2596)': {'add': [2596]}, '(None, None, 2600)': {'add': [2600]}, '(None, None, 4618)': {'add': [4618]}, '(None, None, 6082)': {'add': [6082]}, '(None, None, 2435)': {'mod': [2435]}, '(None, None, 2595)': {'mod': [2595]}, '(None, None, 2599)': {'mod': [2599]}, '(None, None, 4617)': {'mod': [4617]}, '(None, None, 6085)': {'mod': [6085]}, '(None, None, 10555)': {'mod': [10555]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, 94)': {'add': [94]}}}, {'path': 'src/backend/base/poetry.lock', 'status': 'modified', 'Loc': {'(None, None, 741)': {'add': [741]}, '(None, None, 3238)': {'mod': [3238]}}}, {'path': 'src/backend/base/pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, 66)': {'add': [66]}}}, {'path': 'src/frontend/src/utils/styleUtils.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [173, 365]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/utils/styleUtils.ts"], "doc": [], "test": [], "config": ["pyproject.toml", "poetry.lock", "src/backend/base/poetry.lock", "src/backend/base/pyproject.toml"], "asset": []}} 
-{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "9d8009f2f5c5e3fd3bf47760debc787deb454b1a", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3004", "iss_label": "bug", "title": "Problem with Global Variables Setting Page", "body": "### Bug Description\r\n\r\nWhen entering http://127.0.0.1:7860/settings/global-variables\r\n\r\nI am getting error in JS console.\r\n```\r\n`DialogContent` requires a `DialogTitle` for the component to be accessible for screen reader users.\r\n\r\nIf you want to hide the `DialogTitle`, you can wrap it with our VisuallyHidden component.\r\n\r\nFor more information, see https://radix-ui.com/primitives/docs/components/dialog [index-BMduUo-e.js:3231:165](http://127.0.0.1:7860/assets/index-BMduUo-e.js)\r\n TitleWarning http://127.0.0.1:7860/assets/index-BMduUo-e.js:3231\r\n Qj http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Hk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Wk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Pk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Ek http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n jg http://127.0.0.1:7860/assets/index-BMduUo-e.js:982\r\n Wk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Pk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Gk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Ct http://127.0.0.1:7860/assets/index-BMduUo-e.js:969\r\n xt http://127.0.0.1:7860/assets/index-BMduUo-e.js:969\r\n```\r\n\r\nI am also getting error when adding again earlier deleted variable:\r\n\"Sorry, we found an unexpected error!\r\nPlease report errors with detailed tracebacks on the [GitHub Issues](https://github.com/langflow-ai/langflow/issues) page.\r\nThank you!\"\r\n\r\nSo as asked, I am kindly reporting it.\r\n\r\nAlso, There is no feature to edit fields.\r\n\r\n\r\n### Reproduction\r\n\r\nJS error:\r\n1. Just enter the page\r\n\r\nSaving Error:\r\n1. saving new variable\r\n2. deleting this new variable\r\n3. 
adding it again with the same name\r\n\r\n\r\n### Expected behavior\r\n\r\nIt should work without error\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Operating System\r\n\r\nWindows 11 pro\r\n\r\n### Langflow Version\r\n\r\n1.13\r\n\r\n### Python Version\r\n\r\nNone\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Flow File\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3284", "commit_html_url": null, "file_loc": "{'base_commit': '9d8009f2f5c5e3fd3bf47760debc787deb454b1a', 'files': [{'path': 'src/backend/base/langflow/api/v1/variable.py', 'status': 'modified', 'Loc': {\"(None, 'create_variable', 17)\": {'add': [22], 'mod': [26, 27, 28, 29, 30, 31, 32, 33, 34, 36, 37, 39, 40, 42, 44, 46, 47, 48, 49, 50, 51, 52]}, \"(None, 'read_variables', 60)\": {'add': [63], 'mod': [67, 68]}, \"(None, 'update_variable', 74)\": {'add': [79], 'mod': [83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95]}, \"(None, 'delete_variable', 101)\": {'add': [105], 'mod': [109, 110, 111, 112, 113, 114, 115]}, '(None, None, None)': {'mod': [1, 5, 7, 10, 11]}}}, {'path': 'src/backend/base/langflow/services/variable/base.py', 'status': 'modified', 'Loc': {\"('VariableService', None, 11)\": {'add': [84], 'mod': [72]}}}, {'path': 'src/backend/base/langflow/services/variable/kubernetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [12, 13]}, \"('KubernetesSecretService', None, 16)\": {'add': [123], 'mod': [113, 114, 115, 116, 117, 118]}}}, {'path': 'src/backend/base/langflow/services/variable/service.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [11]}, \"('DatabaseVariableService', 'get_variable', 68)\": {'add': [78], 'mod': [86, 87]}, \"('DatabaseVariableService', None, 22)\": {'add': [90, 111]}, \"('DatabaseVariableService', 'list_variables', 91)\": {'mod': [92, 93]}, \"('DatabaseVariableService', 'delete_variable', 112)\": {'mod': [118, 123]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/api/v1/variable.py", "src/backend/base/langflow/services/variable/base.py", "src/backend/base/langflow/services/variable/service.py", "src/backend/base/langflow/services/variable/kubernetes.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "1e98d349877305a8ee9c84901282b5731675578f", "is_iss": 0, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/803", "iss_label": "", "title": "debug mode not debug", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nThe attribute in chat.py is not named correctly\n\n### Current behavior \ud83d\ude2f\n\nBecause of the wrong name, the attribute can't be called on the object\n\n### Expected behavior \ud83e\udd14\n\nTo use the attribute in a call without error\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# Paste your prompt here\r\n", "code": null, "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/888", "commit_html_url": null, "file_loc": "{'base_commit': '1e98d349877305a8ee9c84901282b5731675578f', 'files': [{'path': 'scripts/chat.py', 'status': 'modified', 'Loc': {\"(None, 'chat_with_ai', 42)\": {'mod': [67, 74, 113]}}}, {'path': 'scripts/json_parser.py', 'status': 'modified', 'Loc': {\"(None, 'fix_json', 76)\": {'mod': [94]}}}, {'path': 'scripts/main.py', 
'status': 'modified', 'Loc': {\"(None, 'parse_arguments', 266)\": {'add': [268]}, '(None, None, None)': {'add': [294]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/json_parser.py", "scripts/main.py", "scripts/chat.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/894", "iss_label": "bug\nanswered\nreviewed", "title": "RecursionError from response model in 0.47.1", "body": "### Describe the bug\r\n\r\nFastAPI 0.47.1 will not be able to start due to a `RecursionError` when there is a circular reference among models. The issue seems to originate from https://github.com/tiangolo/fastapi/pull/889. This works fine in 0.46.0.\r\n\r\n### Environment\r\n\r\n- OS: Windows\r\n- FastAPI Version: 0.47.1\r\n- Python version: 3.7.0\r\n\r\n### To Reproduce\r\n\r\n```Python\r\nfrom typing import Optional\r\n\r\nfrom fastapi import FastAPI\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass Group(BaseModel):\r\n representative: Optional['Person'] = Field(None)\r\n\r\n\r\nclass Person(BaseModel):\r\n group: Optional[Group] = Field(None)\r\n\r\n\r\nGroup.update_forward_refs()\r\n\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get('/group/{group_id}', response_model=Group)\r\ndef get_group(group_id):\r\n return []\r\n```\r\n\r\n### Expected behavior\r\n\r\nNo exception\r\n\r\n\r\n### Actual output\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 21, in \r\n @app.get('/group/{group_id}', response_model=Group)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 494, in decorator\r\n callbacks=callbacks,\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 438, in add_api_route\r\n callbacks=callbacks,\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 275, in __init__\r\n ] = create_cloned_field(self.response_field)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n [Previous line repeated 981 more times]\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 97, in create_cloned_field\r\n original_type.__name__, __config__=original_type.__config__\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\pydantic\\main.py\", line 773, in create_model\r\n return type(model_name, (__base__,), namespace)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\pydantic\\main.py\", line 152, in __new__\r\n if issubclass(base, BaseModel) and base != BaseModel:\r\n File \"D:\\virtualenvs\\test\\lib\\abc.py\", line 143, in __subclasscheck__\r\n return _abc_subclasscheck(cls, subclass)\r\nRecursionError: maximum recursion depth exceeded in comparison\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/fastapi/fastapi/commit/0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", 
"file_loc": "{'base_commit': '0f152b4e97a102c0105f26d76d6e1bba3b12fc2a', 'files': [{'path': 'fastapi/utils.py', 'status': 'modified', 'Loc': {\"(None, 'create_cloned_field', 134)\": {'mod': [134, 141, 142, 143, 160, 163]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/utils.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "05676caf70db7f3715cf6a3b4680f15efd45977a", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6202", "iss_label": "bug\nstale", "title": "Llama-cpp-python 0.2.81 'already loaded' fails to load models", "body": "### Describe the bug\r\n\r\nAttempting to load a model after running the update-wizard-macos today (the version from a day or two ago worked fine) fails with the stack trace log included below. \r\n\r\nNotably, the error message references [this new issue in llama-cpp-python](https://github.com/abetlen/llama-cpp-python/issues/1575).\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n- Run the update wizard to update software.\r\n- Attempt to load a gguf model using the GPU and llama.cpp\r\n- Observe that loading fails.\r\n\r\n### Screenshot\r\n\r\n![Screenshot 2024-07-04 at 11 10 47\u202fPM](https://github.com/oobabooga/text-generation-webui/assets/9359101/72148b05-8a43-4d2e-9fd5-7ba6fa57b317)\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/ui_model_menu.py\", line 246, in load_model_wrapper\r\n shared.model, shared.tokenizer = load_model(selected_model, loader)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/models.py\", line 94, in load_model\r\n output = load_func_map[loader](model_name)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/models.py\", line 275, in llamacpp_loader\r\n model, tokenizer = LlamaCppModel.from_pretrained(model_file)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/llamacpp_model.py\", line 39, in from_pretrained\r\n LlamaCache = llama_cpp_lib().LlamaCache\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/llama_cpp_python_hijack.py\", line 38, in llama_cpp_lib\r\n raise Exception(f\"Cannot import 'llama_cpp_cuda' because '{imported_module}' is already imported. See issue #1575 in llama-cpp-python. Please restart the server before attempting to use a different version of llama-cpp-python.\")\r\nException: Cannot import 'llama_cpp_cuda' because 'llama_cpp' is already imported. See issue #1575 in llama-cpp-python. Please restart the server before attempting to use a different version of llama-cpp-python.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nM1 Max Macbook Pro, MacOS 14.5\r\n```\r\n\r\nEdit: Just realized that Ooobabooga was the one that created that issue on the llama-cpp-python project, so I guess this error was already known. 
Sorry if this issue is therefore somewhat redundant\r\n\r\n", "code": null, "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/6227", "commit_html_url": null, "file_loc": "{'base_commit': '05676caf70db7f3715cf6a3b4680f15efd45977a', 'files': [{'path': 'modules/llama_cpp_python_hijack.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, \"(None, 'llama_cpp_lib', 13)\": {'mod': [16, 17, 18, 19, 20, 21, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 64, 65, 67]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/llama_cpp_python_hijack.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "3c076c3c8096fa83440d701ba4d7d49606aaf61f", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2958", "iss_label": "bug", "title": "Latest version of Pillow breaks current implementation in html_generator.py.", "body": "### Describe the bug\n\nPillow 10.0.0 removed `ANTIALIAS` from `PIL.Image`. Current implementation requires 9.5.0, however the requirements.txt currently allows for 10.0.0 to be installed.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAdd new characters with png images and load the webui in chat mode.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nTraceback (most recent call last):\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\routes.py\", line 427, in run_predict\r\n output = await app.get_blocks().process_api(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\blocks.py\", line 1323, in process_api\r\n result = await self.call_function(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\blocks.py\", line 1051, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\extensions\\gallery\\script.py\", line 71, in generate_html\r\n image_html = f''\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\modules\\html_generator.py\", line 144, in get_image_cache\r\n img = make_thumbnail(Image.open(path))\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\modules\\html_generator.py\", line 132, in make_thumbnail\r\n 
image = ImageOps.fit(image, (350, 470), Image.ANTIALIAS)\r\nAttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'\n```\n\n\n### System Info\n\n```shell\nWindows 10\r\nGPU: GTX 1080ti\n```\n", "code": null, "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/2954", "commit_html_url": null, "file_loc": "{'base_commit': '3c076c3c8096fa83440d701ba4d7d49606aaf61f', 'files': [{'path': 'modules/html_generator.py', 'status': 'modified', 'Loc': {\"(None, 'make_thumbnail', 129)\": {'mod': [132]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Other\nDependency declaration "}, "loctype": {"code": ["modules/html_generator.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "cf2c4e740b1d06e145c1992515d9b34e18affc95", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/801", "iss_label": "enhancement", "title": "How can we disable Gradio analytics?", "body": "**Description**\r\n\r\nHow / where can this be implemented?\r\n\r\nhttps://github.com/brkirch/stable-diffusion-webui/commit/a534959cbcabc95af50fbbe4654f8c0ee1cdd41c\r\n\r\n`os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'`\r\n\r\n**Additional Context**\r\n\r\nFor [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)\r\n\r\n[preserve privacy by disabling gradio analytics globally\r\n#8658 ](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/8658)\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/cf2c4e740b1d06e145c1992515d9b34e18affc95", "file_loc": "{'base_commit': 'cf2c4e740b1d06e145c1992515d9b34e18affc95', 'files': [{'path': 'server.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["server.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "a3085dba073fe8bdcfb5120729a84560f5d024c3", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1000", "iss_label": "bug", "title": "Bot spams random numbers or does not load", "body": "### Describe the bug\n\nHello,\r\nI installed oobabooga with the one-click installer and I cannot load the facebook_opt-2.7b (I copied the console into the log).\r\nI also installed the gpt4x alpaca model with the automatic installer (download-model.bat). If I chat with it, it just spams random 2 and 4 (I took a screenshot and pasted it down below). If I manually install the gpt4x model (with the help of this tutorial: https://youtu.be/nVC9D9fRyNU?t=162 ), I get the same output as the Facebook model in the log. \n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\n1. Automatic installer\r\n2. let download-model.bat download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g or follow this tutorial:\r\n3. let download-model.bat download one model from the list\r\n4. 
start-webui.bat has following arguments: python server.py --auto-devices --chat --wbits 4 --groupsize 128\n\n### Screenshot\n\n![image](https://user-images.githubusercontent.com/125409728/230895729-d2f12173-81a5-4de6-9296-71845906ab01.png)\r\n\n\n### Logs\n\n```shell\nStarting the web UI...\r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\n================================================================================\r\nCUDA SETUP: CUDA runtime path found: C:\\Users\\Caner\\Documents\\oobabooga-windows\\installer_files\\env\\bin\\cudart64_110.dll\r\nCUDA SETUP: Highest compute capability among GPUs detected: 8.6\r\nCUDA SETUP: Detected CUDA version 117\r\nCUDA SETUP: Loading binary C:\\Users\\Caner\\Documents\\oobabooga-windows\\installer_files\\env\\lib\\site-packages\\bitsandbytes\\libbitsandbytes_cuda117.dll...\r\nThe following models are available:\r\n\r\n1. anon8231489123_gpt4-x-alpaca-13b-native-4bit-128g\r\n2. facebook_opt-2.7b\r\n\r\nWhich one do you want to load? 1-2\r\n\r\n2\r\n\r\nLoading facebook_opt-2.7b...\r\nCould not find the quantized model in .pt or .safetensors format, exiting...\r\nDr\u00fccken Sie eine beliebige Taste . . .\n```\n\n\n### System Info\n\n```shell\nWindows 10 Version 22H2, Amd Ryzen 5800x, Palit Gamingpro Rtx 3080.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/a3085dba073fe8bdcfb5120729a84560f5d024c3", "file_loc": "{'base_commit': 'a3085dba073fe8bdcfb5120729a84560f5d024c3', 'files': [{'path': 'modules/models.py', 'status': 'modified', 'Loc': {\"(None, 'load_model', 40)\": {'add': [176]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/models.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1203", "iss_label": "bug", "title": "When I click \"Click Me\" on (Character screen) it no longer generates a log of the current chat/instruction. ", "body": "### Describe the bug\n\nWhen I click \"Click Me\" on (Character screen) it no longer generates a log of the current chat/instruction. \r\n\r\nThis either got caused when I did an update to the whole program. Reinstall does not fix. I also ran some Antivirus and registry changes. I can generate new note pads and I reinstalled notepad on my PC. \r\n\r\nI've of course restarted my pc and I've tried firefox and opera as the host browser. This is a new problem just from today. But I did a few things on my PC. \r\n\r\nNote pad is also missing from my right click \"Create new\" list. However it is is in my folders create new list. The one you can click on in the top menu settings inside explorer. The first thing is I guess I need to rule out if others having the issue or not. Next it would be nice to get some utility to auto save all logs or something as an option. \r\n\r\nThanks for any advice anyone. If it's a bug I will simply wait for a fix. 
It's possible I may have messed something up, but it may also be a design flaw of the program if it's dependent on just one specific thing to save the chat log. \r\n\r\nI have no logs or other info to share at this time. \n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nFor me it's easy. Issue persists even if I restart PC, update or reinstall oobabooga. It worked last night. Today it does not work. \n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nNot applicable.\n```\n\n\n### System Info\n\n```shell\n12700K, Nvidia 4080, Windows 10. Running locally on my PC not a colab etc. Like I said I tried firefox and opera. Issue seems persistent.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "file_loc": "{'base_commit': 'c4aa1a42b156b9c5ddcfb060cc497b2fba55430f', 'files': [{'path': 'server.py', 'status': 'modified', 'Loc': {\"(None, 'create_model_menus', 251)\": {'mod': [324, 349]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["server.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2326", "iss_label": "bug", "title": "extensions/openai KeyError: 'assistant'", "body": "### Describe the bug\r\n\r\nStarting after [https://github.com/oobabooga/text-generation-webui/pull/2291]\r\n\r\nWhich I think is a great improvement.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\nStart server with extension --openai --model openaccess-ai-collective_manticore-13b.\r\nStarting [DGdev91 Auto-GPT](https://github.com/DGdev91/Auto-GPT), runs 1 cycle, give 'y' for the second, the error appears.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nopenaccess-ai-collectiveException occurred during processing of request from ('127.0.0.1', 42032)\r\nTraceback (most recent call last):\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 683, in process_request_thread\r\n self.finish_request(request, client_address)\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 360, in finish_request\r\n self.RequestHandlerClass(request, client_address, self)\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 747, in __init__\r\n self.handle()\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/http/server.py\", line 433, in handle\r\n self.handle_one_request()\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/http/server.py\", line 421, in handle_one_request\r\n method()\r\n File \"/home/mihai/text-generation-webui/extensions/openai/script.py\", line 310, in do_POST\r\n msg = role_formats[role].format(message=content)\r\nKeyError: 'assistant'\r\n----------------------------------------\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nWin11 WSL2 Ubuntu 20.04\r\nPython 3.10\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", 
"file_loc": "{'base_commit': '2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df', 'files': [{'path': 'extensions/openai/script.py', 'status': 'modified', 'Loc': {\"('Handler', 'do_POST', 159)\": {'mod': [262]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["extensions/openai/script.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "92a0994f01ec6ae7756951312a70e101fb33c7e5", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/597", "iss_label": "", "title": "The app starts transparent!", "body": "Seems like everything was ok with the install.\r\n\r\nWhen I run I get the error: / warning:\r\n\r\n```\r\n> python run.py --execution-provider cuda\r\nException in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 839, in callit\r\n func(*args)\r\n File \"C:\\Users\\dulci\\Documents\\Development\\Deep-Live-Cam\\.venv\\lib\\site-packages\\customtkinter\\windows\\widgets\\scaling\\scaling_tracker.py\", line \r\n186, in check_dpi_scaling\r\n window.block_update_dimensions_event()\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 2383, in __getattr__\r\n return getattr(self.tk, attr)\r\nAttributeError: '_tkinter.tkapp' object has no attribute 'block_update_dimensions_event'\r\n(.venv) PS C:\\Users\\dulci\\Documents\\Development\\Deep-Live-Cam>\r\n``` \r\n\r\nAnd the application starts so transparent (opacity super low) that I can barely see it and I can't use it, because it is almost invisible against the desktop background.\r\n\r\nCan anybody suggest how to fix it?\r\n\r\nI am working with venv python 3.10.6.\r\n", "code": null, "pr_html_url": "https://github.com/hacksider/Deep-Live-Cam/pull/632", "commit_html_url": null, "file_loc": "{'base_commit': '92a0994f01ec6ae7756951312a70e101fb33c7e5', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 2)': {'add': [2]}, '(None, None, 3)': {'add': [3]}, '(None, None, 37)': {'add': [37]}, '(None, None, 47)': {'add': [47]}, '(None, None, 68)': {'add': [68]}, '(None, None, 168)': {'add': [168]}, '(None, None, 365)': {'add': [365]}, '(None, None, 6)': {'mod': [6]}, '(None, None, 8)': {'mod': [8]}, '(None, None, 10)': {'mod': [10]}, '(None, None, 12)': {'mod': [12]}, '(None, None, 15)': {'mod': [15]}, '(None, None, 20)': {'mod': [20]}, '(None, None, 24)': {'mod': [24]}, '(None, None, 28)': {'mod': [28]}, '(None, None, 32)': {'mod': [32]}, '(None, None, 36)': {'mod': [36]}, '(None, None, 39)': {'mod': [39, 40, 41]}, '(None, None, 43)': {'mod': [43, 44, 46]}, '(None, None, 49)': {'mod': [49, 50, 51, 52, 53, 54, 55, 56, 57]}, '(None, None, 59)': {'mod': [59]}, '(None, None, 61)': {'mod': [61, 62]}, '(None, None, 64)': {'mod': [64]}, '(None, None, 66)': {'mod': [66, 67]}, '(None, None, 71)': {'mod': [71, 72]}, '(None, None, 75)': {'mod': [75]}, '(None, None, 77)': {'mod': [77]}, '(None, None, 82)': {'mod': [82]}, '(None, None, 84)': {'mod': [84, 85, 86]}, '(None, None, 91)': {'mod': [91, 92]}, '(None, None, 96)': {'mod': [96]}, '(None, None, 98)': {'mod': [98, 100]}, '(None, None, 105)': {'mod': 
[105, 106]}, '(None, None, 110)': {'mod': [110]}, '(None, None, 112)': {'mod': [112, 113]}, '(None, None, 118)': {'mod': [118, 119]}, '(None, None, 123)': {'mod': [123]}, '(None, None, 125)': {'mod': [125, 126]}, '(None, None, 131)': {'mod': [131, 132]}, '(None, None, 136)': {'mod': [136]}, '(None, None, 138)': {'mod': [138, 139]}, '(None, None, 144)': {'mod': [144, 145]}, '(None, None, 150)': {'mod': [150, 151]}, '(None, None, 153)': {'mod': [153, 154]}, '(None, None, 156)': {'mod': [156]}, '(None, None, 158)': {'mod': [158, 159, 160, 161, 162]}, '(None, None, 164)': {'mod': [164]}, '(None, None, 166)': {'mod': [166, 167]}, '(None, None, 170)': {'mod': [170]}, '(None, None, 197)': {'mod': [197]}, '(None, None, 206)': {'mod': [206]}, '(None, None, 210)': {'mod': [210]}, '(None, None, 224)': {'mod': [224]}, '(None, None, 237)': {'mod': [237]}, '(None, None, 247)': {'mod': [247]}, '(None, None, 274)': {'mod': [274]}, '(None, None, 306)': {'mod': [306]}, '(None, None, 314)': {'mod': [314]}, '(None, None, 343)': {'mod': [343, 344]}, '(None, None, 346)': {'mod': [346, 347]}, '(None, None, 353)': {'mod': [353]}, '(None, None, 360)': {'mod': [360]}}}, {'path': 'modules/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 548], 'mod': [10, 13, 18, 19, 22, 23, 24, 25, 27, 28, 29, 30, 32, 33, 34, 35, 37, 38]}, \"(None, 'analyze_target', 154)\": {'add': [167], 'mod': [155, 163, 166]}, \"(None, 'create_source_target_popup', 176)\": {'add': [182], 'mod': [191, 192, 200, 201, 203, 204, 206, 207, 210, 211, 214, 215, 217, 218, 221, 222, 224]}, \"(None, 'update_popup_source', 221)\": {'add': [233], 'mod': [228, 229, 238, 240, 241, 242, 243, 245, 246, 249, 250]}, \"(None, 'select_source_path', 290)\": {'add': [299, 302], 'mod': [294]}, \"(None, 'swap_faces_paths', 305)\": {'add': [323, 326]}, \"(None, 'select_target_path', 329)\": {'add': [338, 343, 346], 'mod': [333]}, \"(None, 'update_preview', 435)\": {'add': [445], 'mod': [437, 438, 441, 443, 444, 450]}, \"(None, 'init', 61)\": {'mod': [61]}, \"(None, 'create_root', 70)\": {'mod': [70, 73, 74, 75, 77, 78, 79, 80, 81, 83, 84, 86, 87, 89, 90, 92, 93, 95, 96, 99, 100, 103, 104, 106, 107, 108, 109, 112, 113, 116, 117, 119, 121, 122, 124, 125, 126, 129, 130, 132, 133, 135, 136, 138, 139, 141, 142, 144, 145, 147, 148, 149, 150]}, \"(None, 'on_submit_click', 184)\": {'mod': [189]}, \"(None, 'on_button_click', 194)\": {'mod': [195, 197, 198]}, \"(None, 'create_preview', 258)\": {'mod': [263, 265, 268, 269, 271]}, \"(None, 'select_output_path', 349)\": {'mod': [353, 355]}, \"(None, 'check_and_ignore_nsfw', 364)\": {'mod': [365, 367, 370, 372, 375, 376, 378]}, \"(None, 'fit_image_to_size', 381)\": {'mod': [390]}, \"(None, 'toggle_preview', 417)\": {'mod': [418]}, \"(None, 'init_preview', 425)\": {'mod': [431]}, \"(None, 'create_webcam_preview', 463)\": {'mod': [466, 467, 468, 469, 471, 484, 487, 512]}, \"(None, 'on_submit_click', 527)\": {'mod': [533]}, \"(None, 'create_source_target_popup_for_webcam', 519)\": {'mod': [540, 543, 546]}, \"(None, 'refresh_data', 550)\": {'mod': [553, 554, 563, 565, 566, 568, 569, 571, 572, 575, 576, 579, 580, 581, 584, 585, 588, 589, 590]}, \"(None, 'update_webcam_source', 593)\": {'mod': [596, 600, 601, 610, 612, 613, 614, 615, 617, 618, 621, 622, 623, 624]}, \"(None, 'update_webcam_target', 629)\": {'mod': [629, 632, 636, 637, 646, 648, 649, 650, 651, 653, 654, 657, 658, 659, 660]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": 
"pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "Textualize", "repo_name": "rich", "base_commit": "4be4f6bd24d2a35da0e50df943209ad24c068159", "is_iss": 0, "iss_html_url": "https://github.com/Textualize/rich/issues/388", "iss_label": "enhancement\ndone", "title": "[REQUEST] minimal table width", "body": "hey @willmcgugan, great package!\r\n\r\nin `Table.add_column` method it would be nice to have an _actual_ minimal with option. [The documentation](https://github.com/willmcgugan/rich/blob/c98bf070e4f3785dbb050b72c09663021c5b1d73/rich/table.py#L303) says that `width` argument sets the minimal with for a column, but in fact in my tests it sets a constant width for one. I'd like my column to be at least `width` wide but expand if there is a longer string to display.", "code": null, "pr_html_url": "https://github.com/Textualize/rich/pull/391", "commit_html_url": null, "file_loc": "{'base_commit': '4be4f6bd24d2a35da0e50df943209ad24c068159', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, 26)': {'add': [26]}}}, {'path': 'rich/console.py', 'status': 'modified', 'Loc': {\"('Console', 'rule', 991)\": {'mod': [993]}}}, {'path': 'rich/measure.py', 'status': 'modified', 'Loc': {\"('Measurement', None, 11)\": {'add': [45]}, \"('Measurement', 'with_maximum', 34)\": {'mod': [41]}}}, {'path': 'rich/table.py', 'status': 'modified', 'Loc': {\"('Column', None, 29)\": {'add': [50, 54]}, \"('Table', '__init__', 118)\": {'add': [123, 157]}, \"('Table', 'add_column', 278)\": {'add': [288, 303, 321]}, \"('Table', '_calculate_column_widths', 410)\": {'add': [417], 'mod': [458, 459]}, \"('Table', '_measure_column', 558)\": {'add': [587]}, \"('Table', '__rich_measure__', 241)\": {'mod': [249, 254, 255, 265]}, '(None, None, None)': {'mod': [740, 741, 743, 745, 746, 747, 748, 749, 750, 751, 753, 754, 755, 756, 757, 758, 759, 761]}}}, {'path': 'tests/test_measure.py', 'status': 'modified', 'Loc': {\"(None, 'test_measure_renderables', 29)\": {'add': [33]}}}, {'path': 'tests/test_table.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 125]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/measure.py", "rich/console.py", "rich/table.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_table.py", "tests/test_measure.py"], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "9fa57892790ce205634f6a7c83de2b9e52ab5284", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/8799", "iss_label": "site-support-request\naccount-needed", "title": "Request support site: Viceland", "body": "Viceland is a new channel from Vice. Website at https://www.viceland.com. 
Appears to use Uplynk, and may be encrypted, so it may not be possible.\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/9fa57892790ce205634f6a7c83de2b9e52ab5284", "file_loc": "{'base_commit': '9fa57892790ce205634f6a7c83de2b9e52ab5284', 'files': [{'path': 'youtube_dl/extractor/uplynk.py', 'status': 'modified', 'Loc': {\"('UplynkIE', None, 13)\": {'add': [51], 'mod': [29, 30]}, \"('UplynkIE', '_real_extract', 52)\": {'mod': [53]}, \"('UplynkPreplayIE', '_real_extract', 59)\": {'mod': [64]}}}, {'path': 'youtube_dl/extractor/viceland.py', 'status': 'modified', 'Loc': {\"('VicelandIE', None, 20)\": {'add': [27]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/uplynk.py", "youtube_dl/extractor/viceland.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "5dbe81a1d35ae704b5ea208698a6bb785923d71a", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/8171", "iss_label": "", "title": "Vimeo ondemand download preview only ", "body": "I try to download a video from vimeo on demand but I only get the preview of it. \nCould someone help me please \n\n youtube-dl -u .com -p https://vimeo.com/ondemand/thelastcolony/150274832 --verbo\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'https://vimeo.com/ondemand/thelastcolony/150274832', u'--verbo']\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\n[debug] youtube-dl version 2016.01.01\n[debug] Python version 2.7.10 - Windows-8-6.2.9200\n[debug] exe versions: none\n[debug] Proxy map: {}\n[vimeo] Logging in\n[vimeo] 150274832: Downloading webpage\n[vimeo] 150274832: Extracting information\n[vimeo] 150274832: Downloading webpage\n[vimeo] 150274832: Downloading JSON metadata\n[vimeo] 150274832: Downloading m3u8 information\n[debug] Invoking downloader on u'https://01-lvl3-gcs-pdl.vimeocdn.com/vimeo-prod-skyfire-std-us/01/54/6/150274832/459356950.mp4?expires=1452223995&token=00c3c5830ebe84f9310d4'\n[download] The Last Colony-150274832.mp4 has already been downloaded\n[download] 100% of 119.97MiB\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/5dbe81a1d35ae704b5ea208698a6bb785923d71a", "file_loc": "{'base_commit': '5dbe81a1d35ae704b5ea208698a6bb785923d71a', 'files': [{'path': 'youtube_dl/extractor/vimeo.py', 'status': 'modified', 'Loc': {\"('VimeoIE', '_real_extract', 264)\": {'add': [356], 'mod': [265, 267, 269, 345]}, \"('VimeoIE', '_extract_vimeo_url', 214)\": {'mod': [220]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/vimeo.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "365343131d752bece96d2279a3e0bcd7e9f0000f", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/17728", "iss_label": "", "title": "[PluralSight] Unable to download captions JSON: HTTP Error 404: Not Found", "body": "last version I tested\r\nyoutube-dl is up-to-date (2018.09.26)\r\n\r\nI'm trying to download a video from PluralSight. 
The video is OK, but the subtitle cannot be downloaded. The error is \r\nWARNING: Unable to download captions JSON: HTTP Error 404: Not Found\r\nMy command: \r\n\r\nyoutube-dl --username xxx --password xxxx --sleep-interval 35 --max-sleep-interval 120 --sub-lang en --sub-format srt --write-sub https://app.pluralsight.com/library/courses/design-database-structure-sql-server-2014-70-465/table-of-contents\r\n\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/365343131d752bece96d2279a3e0bcd7e9f0000f", "file_loc": "{'base_commit': '365343131d752bece96d2279a3e0bcd7e9f0000f', 'files': [{'path': 'youtube_dl/extractor/pluralsight.py', 'status': 'modified', 'Loc': {\"('PluralsightIE', None, 112)\": {'mod': [213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224]}, \"('PluralsightIE', '_real_extract', 271)\": {'mod': [416]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/pluralsight.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "50f84a9ae171233c08ada41e03f6555c5ed95236", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/7427", "iss_label": "", "title": "\"ERROR: Signature extraction failed\" for youtube video", "body": "Hi,\n\nI have encountered an error with a video:\nhttps://www.youtube.com/watch?v=LDvVYqUMuJ0\n\n```\n$ /tmp/ydl/youtube-dl/youtube-dl https://www.youtube.com/watch?v=LDvVYqUMuJ0 --verbose\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'https://www.youtube.com/watch?v=LDvVYqUMuJ0', u'--verbose']\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\n[debug] youtube-dl version 2015.11.02\n[debug] Python version 2.7.10+ - Linux-4.2.0-1-amd64-x86_64-with-debian-stretch-sid\n[debug] exe versions: ffmpeg 1.0.6, ffprobe 1.0.6, rtmpdump 2.4\n[debug] Proxy map: {}\n[youtube] LDvVYqUMuJ0: Downloading webpage\n[youtube] LDvVYqUMuJ0: Downloading video info webpage\n[youtube] LDvVYqUMuJ0: Extracting video information\nWARNING: unable to extract html5 player; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n[youtube] {22} signature length 40.44, html5 player None\nERROR: Signature extraction failed: Traceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n (caused by ExtractorError(u\"Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\",)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/YoutubeDL.py\", line 661, in extract_info\n ie_result = ie.extract(url)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/common.py\", line 290, in extract\n return self._real_extract(url)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 1345, in _real_extract\n encrypted_sig, video_id, player_url, age_gate)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 827, in _decrypt_signature\n 'Signature extraction failed: ' + tb, cause=e)\nExtractorError: Signature extraction failed: Traceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n (caused by ExtractorError(u\"Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\",)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\n```\n\nThanks,\nCorey\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/50f84a9ae171233c08ada41e03f6555c5ed95236", "file_loc": "{'base_commit': '50f84a9ae171233c08ada41e03f6555c5ed95236', 'files': [{'path': 'youtube_dl/extractor/youtube.py', 'status': 'modified', 'Loc': {\"('YoutubeIE', '_extract_signature_function', 704)\": {'mod': [706]}, \"('YoutubeIE', '_real_extract', 1008)\": {'mod': [1346]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16139", "iss_label": "fixed", "title": "ITV BTCC videos support?", "body": "## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.04.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [*] I've **verified** and **I assure** that I'm running youtube-dl **2018.04.09**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [*] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [*] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [*] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [*] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v `), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n$ youtube-dl -v https://pastebin.com/raw/KxD6rhpF --geo-bypass-country UK\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'https://pastebin.com/raw/KxD6rhpF', u'--geo-bypass-country', u'UK']\r\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.04.09\r\n[debug] Python version 2.7.13 (CPython) - Linux-4.9.62-v7+-armv7l-with-debian-9.4\r\n[debug] exe versions: ffmpeg 3.2.10-1, ffprobe 3.2.10-1\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[generic] KxD6rhpF: Requesting header\r\nWARNING: Falling back on generic information extractor.\r\n[generic] KxD6rhpF: Downloading webpage\r\n[generic] KxD6rhpF: Extracting information\r\n[download] Downloading playlist: Brightcove video tester\r\n[generic] playlist Brightcove video tester: Collected 1 video ids (downloading 1 of them)\r\n[download] Downloading video 1 of 1\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[brightcove:new] 5766870719001: Downloading webpage\r\n[brightcove:new] 5766870719001: Downloading JSON metadata\r\nERROR: Access to this resource is forbidden by access policy.\r\nYou might want to use a VPN or a proxy server (with --proxy) to workaround.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/brightcove.py\", line 706, in _real_extract\r\n json_data = self._download_json(api_url, video_id, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 692, in _download_json\r\n encoding=encoding, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 634, in _download_webpage\r\n res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/adobepass.py\", line 1332, in _download_webpage_handle\r\n *args, **compat_kwargs(kwargs))\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 539, in _download_webpage_handle\r\n urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 528, in _request_webpage\r\n raise ExtractorError(errmsg, sys.exc_info()[2], cause=err)\r\nExtractorError: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py\", line 789, in extract_info\r\n ie_result = ie.extract(url)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 440, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/brightcove.py\", line 712, in _real_extract\r\n self.raise_geo_restricted(msg=message)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 743, in raise_geo_restricted\r\n raise GeoRestrictedError(msg, countries=countries)\r\nGeoRestrictedError: Access to this resource is forbidden by access policy.\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n\r\nITV separated the BTCC race videos from the hub (which also seems to be having issues as per https://github.com/rg3/youtube-dl/issues/15925)\r\nLately the video are hosted at http://www.itv.com/btcc/races (ie for a particular weekend all videos are posted at individual pages like: http://www.itv.com/btcc/races/btcc-2018-all-the-action-from-brands-hatch)\r\n\r\nSkimming the source code of this sample weekend page, I extracted the vid params and built a test page:\r\n- Single video: https://pastebin.com/raw/KxD6rhpF\r\n\r\n**Question 1:** is the log error above pointing just to a geo restriction error or is there anything else involved that I missed? (ie: like writing some header to force the ITV scrapper to act instead of a generic one)\r\n\r\n```\r\n\r\n \r\n\r\n \r\n Brightcove video tester\r\n \r\n \r\n\r\n\r\n\r\n\t\r\n\r\n\t\r\n\r\n\r\n```\r\n\r\n**Question 2:** is there any way to generate a playlist of downloadable items based on pages like http://www.itv.com/btcc/races/btcc-2018-all-the-action-from-brands-hatch with youtube-dl?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "file_loc": "{'base_commit': 'ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd', 'files': [{'path': 'youtube_dl/extractor/extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [480]}}}, {'path': 'youtube_dl/extractor/itv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 20]}, \"('ITVIE', '_real_extract', 56)\": {'add': [262]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/itv.py", "youtube_dl/extractor/extractors.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "78f635ad3a8f819645f3991dfd244ff09f06a7f0", "is_iss": 0, "iss_html_url": "https://github.com/localstack/localstack/issues/8833", "iss_label": "type: bug\naws:cloudformation\nstatus: backlog", "title": "bug: CDK Table build with replicationRegions failing on latest", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nTrying to deploy a [CDK Table](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_dynamodb.Table.html) to a localstack environment results in the 
following:\r\n\r\n```\r\nstack | 0/3 | 8:56:58 PM | CREATE_FAILED | AWS::CloudFormation::Stack | stack Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once\r\n```\r\n\r\n```\r\nlocalstack | 2023-08-06T03:56:28.709 DEBUG --- [ asgi_gw_4] localstack.packages.api : Installation of dynamodb-local skipped (already installed).\r\nlocalstack | 2023-08-06T03:56:28.709 DEBUG --- [ asgi_gw_4] l.services.dynamodb.server : Starting DynamoDB Local: ['java', '-Xmx256m', '-javaagent:/usr/lib/localstack/dynamodb-local/latest/ddb-local-loader-0.1.jar', '-Djava.library.path=/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal_lib', '-jar', '/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal.jar', '-port', '35799', '-dbPath', '/var/lib/localstack/tmp/state/dynamodb']\r\nlocalstack | 2023-08-06T03:56:28.710 DEBUG --- [uncthread160] localstack.utils.run : Executing command: ['java', '-Xmx256m', '-javaagent:/usr/lib/localstack/dynamodb-local/latest/ddb-local-loader-0.1.jar', '-Djava.library.path=/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal_lib', '-jar', '/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal.jar', '-port', '35799', '-dbPath', '/var/lib/localstack/tmp/state/dynamodb']\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : Initializing DynamoDB Local with the following configuration:\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : Port:\t35799\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : InMemory:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : DbPath:\t/var/lib/localstack/tmp/state/dynamodb\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : SharedDb:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : shouldDelayTransientStatuses:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : CorsParams:\tnull\r\nlocalstack | 2023-08-06T03:56:28.950 DEBUG --- [uncthread160] l.services.dynamodb.server :\r\nlocalstack | 2023-08-06T03:56:29.715 INFO --- [ asgi_gw_4] botocore.credentials : Found credentials in environment variables.\r\nlocalstack | 2023-08-06T03:56:30.532 DEBUG --- [ asgi_gw_4] l.services.plugins : checking service health dynamodb:4566\r\nlocalstack | 2023-08-06T03:56:30.534 INFO --- [ asgi_gw_4] localstack.utils.bootstrap : Execution of \"require\" took 1887.18ms\r\nlocalstack | 2023-08-06T03:56:30.879 DEBUG --- [ asgi_gw_1] l.services.plugins : checking service health kinesis:4566\r\nlocalstack | 2023-08-06T03:56:30.886 INFO --- [ asgi_gw_1] l.s.k.kinesis_mock_server : Creating kinesis backend for account 000000000000\r\nlocalstack | 2023-08-06T03:56:30.887 DEBUG --- [ asgi_gw_1] localstack.packages.api : Starting installation of kinesis-local...\r\nlocalstack | 2023-08-06T03:56:30.887 DEBUG --- [ asgi_gw_1] localstack.utils.run : Executing command: ['npm', 'install', '--prefix', '/var/lib/localstack/lib/kinesis-local/0.4.2', 'kinesis-local@0.4.2']\r\nlocalstack | 2023-08-06T03:56:32.575 DEBUG --- [ asgi_gw_1] localstack.packages.core : Setting ownership root:root on /var/lib/localstack/lib/kinesis-local/0.4.2\r\nlocalstack | 2023-08-06T03:56:32.575 DEBUG --- [ asgi_gw_1] localstack.packages.api : Installation of kinesis-local 
finished.\r\nlocalstack | 2023-08-06T03:56:32.576 DEBUG --- [ asgi_gw_1] l.s.k.kinesis_mock_server : starting kinesis process ['node', PosixPath('/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/main.js')] with env vars {'KINESIS_MOCK_CERT_PATH': '/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/server.json', 'KINESIS_MOCK_PLAIN_PORT': 42209, 'KINESIS_MOCK_TLS_PORT': 34279, 'SHARD_LIMIT': '100', 'ON_DEMAND_STREAM_COUNT_LIMIT': '10', 'AWS_ACCOUNT_ID': '000000000000', 'CREATE_STREAM_DURATION': '500ms', 'DELETE_STREAM_DURATION': '500ms', 'REGISTER_STREAM_CONSUMER_DURATION': '500ms', 'START_STREAM_ENCRYPTION_DURATION': '500ms', 'STOP_STREAM_ENCRYPTION_DURATION': '500ms', 'DEREGISTER_STREAM_CONSUMER_DURATION': '500ms', 'MERGE_SHARDS_DURATION': '500ms', 'SPLIT_SHARD_DURATION': '500ms', 'UPDATE_SHARD_COUNT_DURATION': '500ms', 'UPDATE_STREAM_MODE_DURATION': '500ms', 'SHOULD_PERSIST_DATA': 'true', 'PERSIST_PATH': '../../../var/lib/localstack/tmp/state/kinesis', 'PERSIST_FILE_NAME': '000000000000.json', 'PERSIST_INTERVAL': '5s', 'LOG_LEVEL': 'INFO'}\r\nlocalstack | 2023-08-06T03:56:32.576 DEBUG --- [uncthread166] localstack.utils.run : Executing command: ['node', PosixPath('/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/main.js')]\r\nlocalstack | 2023-08-06T03:56:32.834 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.823005Z contextId=6956dd23-c61e-4aa1-80ce-b8bfc8d0894b, cacheConfig={\"awsAccountId\":\"000000000000\",\"awsRegion\":\"us-east-1\",\"createStreamDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"deleteStreamDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"deregisterStreamConsumerDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"initializeStreams\":null,\"logLevel\":\"INFO\",\"mergeShardsDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"onDemandStreamCountLimit\":10,\"persistConfig\":{\"fileName\":\"000000000000.json\",\"interval\":{\"length\":5,\"unit\":\"SECONDS\"},\"loadIfExists\":true,\"path\":\"../../../var/lib/localstack/tmp/state/kinesis\",\"shouldPersist\":true},\"registerStreamConsumerDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"shardLimit\":100,\"splitShardDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"startStreamEncryptionDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"stopStreamEncryptionDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"updateShardCountDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"}} Logging Cache Config\r\nlocalstack | 2023-08-06T03:56:32.986 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.986197Z Starting Kinesis TLS Mock Service on port 34279\r\nlocalstack | 2023-08-06T03:56:32.987 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.986862Z Starting Kinesis Plain Mock Service on port 42209\r\nlocalstack | 2023-08-06T03:56:32.994 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.994001Z contextId=1d81ef53-1648-4fbc-8b16-f09375d77ece Starting persist data loop\r\nlocalstack | 2023-08-06T03:56:33.215 DEBUG --- [uncthread158] l.s.c.resource_provider : Executing callback method for AWS::DynamoDB::Table:ddbFooTabletable735E488F\r\nlocalstack | 2023-08-06T03:56:33.330 DEBUG --- [uncthread158] l.s.c.e.template_deployer : Error applying changes for CloudFormation stack 
\"stack-ddbFooTableNestedStackdd-f1d922ad\": 'NoneType' object has no attribute 'get' Traceback (most recent call last):\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 965, in _run\r\nlocalstack | self.do_apply_changes_in_loop(changes, stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1011, in do_apply_changes_in_loop\r\nlocalstack | should_deploy = self.prepare_should_deploy_change(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1105, in prepare_should_deploy_change\r\nlocalstack | resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 478, in _resolve_refs_recursively\r\nlocalstack | value[key] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 478, in _resolve_refs_recursively\r\nlocalstack | value[key] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in 
resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 497, in _resolve_refs_recursively\r\nlocalstack | value[i] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 290, in _resolve_refs_recursively\r\nlocalstack | resolved_getatt = get_attr_from_model_instance(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 102, in get_attr_from_model_instance\r\nlocalstack | attribute = attribute.get(part)\r\nlocalstack | AttributeError: 'NoneType' object has no attribute 'get'\r\nlocalstack |\r\nlocalstack |\r\nlocalstack | 2023-08-06T03:56:33.426 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:33.445 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:38.447 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:38.458 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:43.464 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:43.473 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:48.479 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:48.489 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:53.495 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:53.502 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:58.487 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:58.495 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:58.743 DEBUG --- [uncthread154] 
l.s.c.e.template_deployer : Error applying changes for CloudFormation stack \"stack\": Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once Traceback (most recent call last):\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 965, in _run\r\nlocalstack | self.do_apply_changes_in_loop(changes, stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1039, in do_apply_changes_in_loop\r\nlocalstack | self.apply_change(change, stack=stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1152, in apply_change\r\nlocalstack | progress_event = executor.deploy_loop(resource_provider_payload) # noqa\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 572, in deploy_loop\r\nlocalstack | event = self.execute_action(resource_provider, payload)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 638, in execute_action\r\nlocalstack | return resource_provider.create(request)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 350, in create\r\nlocalstack | return self.create_or_delete(request)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 499, in create_or_delete\r\nlocalstack | result_handler(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/models/cloudformation.py\", line 55, in _handle_result\r\nlocalstack | connect_to().cloudformation.get_waiter(\"stack_create_complete\").wait(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/waiter.py\", line 55, in wait\r\nlocalstack | Waiter.wait(self, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/waiter.py\", line 375, in wait\r\nlocalstack | raise WaiterError(\r\nlocalstack | botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once\r\nlocalstack |\r\nlocalstack |\r\nlocalstack | 2023-08-06T03:57:03.504 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:57:03.511 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:57:03.520 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\n```\r\n\r\nI am using the CDK Table class with a stream enabled, which requires replicas to enable the stream. 
There's no creative way around this issue that I'm aware of, due to the stack failing before the fully deployment of the Table class.\n\n### Expected Behavior\n\nI'm expecting the build to succeed locally, so I can continue with our stack deployment chain for our local environments.\r\n\r\nThe behavior is not present on `localstack/localstack-pro:2.2.0` or any release prior to the update this week. I am successfully able to deploy to live environments with the same CDK stack without error.\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\ndocker-compose up\r\n \r\n```\r\nversion: \"3.8\"\r\n\r\nnetworks:\r\n ls:\r\n name: ls\r\n\r\nservices:\r\n localstack:\r\n container_name: \"${LOCALSTACK_DOCKER_NAME-localstack}\"\r\n environment:\r\n - DEBUG=${DEBUG-1}\r\n - DISABLE_CORS_CHECKS=1\r\n - DISABLE_CUSTOM_CORS_APIGATEWAY=1\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - EXTRA_CORS_ALLOWED_ORIGINS=*\r\n - MAIN_DOCKER_NETWORK=ls\r\n - PERSISTENCE=${PERSISTENCE-}\r\n env_file:\r\n - ./localstack.local.env\r\n image: \"localstack/localstack-pro:${LOCALSTACK_VERSION-latest}\"\r\n networks:\r\n - ls\r\n ports:\r\n - \"127.0.0.1:4566:4566\" # LocalStack Gateway\r\n - \"127.0.0.1:4510-4559:4510-4559\" # external services port range\r\n - \"127.0.0.1:53:53\" # DNS config (required for Pro)\r\n - \"127.0.0.1:53:53/udp\" # DNS config (required for Pro)\r\n - \"127.0.0.1:443:443\" # LocalStack HTTPS Gateway (required for Pro)\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n\r\n```\r\n\r\n.env file contains:\r\n\r\n```\r\nLOCALSTACK_API_KEY=xxxxxxxxxx\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nnpx cdklocal deploy api\r\n\r\n```\r\nconst account = process.env.CDK_ACCOUNT || \"000000000000\";\r\nconst region = \"us-west-2\";\r\n\r\nnew APIStack(app, \"api\", {\r\n crossRegionReferences: true,\r\n env: { account, region },\r\n stackName: \"api\",\r\n});\r\n```\r\n\r\n```\r\nnew DynamoStack(this, \"ddbFooTable\", {\r\n billingMode: BillingMode.PROVISIONED,\r\n deletionProtection: false,\r\n encryption: TableEncryption.AWS_MANAGED,\r\n partitionKey: { name: \"id\", type: AttributeType.STRING },\r\n pointInTimeRecovery: false,\r\n removalPolicy: false\r\n ? RemovalPolicy.RETAIN\r\n : RemovalPolicy.DESTROY,\r\n replicationRegions: [\"us-west-1\"],\r\n stream: StreamViewType.NEW_AND_OLD_IMAGES,\r\n tableName: \"foo\",\r\n});\r\n```\r\n\n\n### Environment\n\n```markdown\n- OS: OSX 13.4\r\n- LocalStack: latest (70e077bf43491cc0954698c1240159caa9cecc0ac6652b890b52aaf0801d5fcb)\r\n- aws-cdk-lib: 2.90.0\r\n- aws-cdk-local: 2.18.0\n```\n\n\n### Anything else?\n\nUnfortunately, I'm in a position where I need the latest update due to Cognito User Pool domains not being functional in prior releases. I'm trying to get our devs to authenticate locally with an Oauth2 IDP flow instead of forcing them to authenticate with a live-stable environment. 
More information [here](https://github.com/localstack/localstack/issues/8700).", "code": null, "pr_html_url": "https://github.com/localstack/localstack/pull/8882", "commit_html_url": null, "file_loc": "{'base_commit': '78f635ad3a8f819645f3991dfd244ff09f06a7f0', 'files': [{'path': 'localstack/services/cloudformation/engine/template_deployer.py', 'status': 'modified', 'Loc': {\"(None, 'get_attr_from_model_instance', 75)\": {'add': [100, 101]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/cloudformation/engine/template_deployer.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "f4a188b6d51155a0831a3246f1d8e4f4be835861", "is_iss": 0, "iss_html_url": "https://github.com/localstack/localstack/issues/4652", "iss_label": "type: bug\nstatus: triage needed", "title": "bug: LAMBDA_DOCKER_FLAGS doesn't work with -e", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nAttempting to add environment variables to containers created to service lambda requests no longer works in the latest version of localstack. This works in localStack version 0.12.16\r\n\r\n### Expected Behavior\r\n\r\nSetting the environment variable `LAMBDA_DOCKER_FLAGS=-e TEST_VAL=True` on localstack's docker container will result in spawned containers created for serving lambda functions having the environment variable TEST_VAL set to True.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n1. Run `docker-compose up -d` with the following docker-compose.yml\r\n```yaml\r\nversion: '2.1'\r\n\r\nservices:\r\n localstack_ltest:\r\n container_name: \"ltest\"\r\n image: localstack/localstack:0.12.18\r\n ports:\r\n - \"4566:4566\"\r\n environment:\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - LOCALSTACK_API_KEY=\r\n - LAMBDA_EXECUTOR=docker-reuse\r\n - LAMBDA_DOCKER_FLAGS=-e TEST_VAL=True\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n restart: always\r\n```\r\n2. Create the file logs-template.yml\r\n```yaml\r\n---\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nResources: \r\n LambdaFunctionLogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties: \r\n RetentionInDays: 60\r\n LogGroupName: !Join [\"\", [\"/aws/lambda/\", !Ref LambdaFunction]]\r\n LambdaFunctionRole:\r\n Type: AWS::IAM::Role\r\n Properties:\r\n AssumeRolePolicyDocument:\r\n Version: '2012-10-17'\r\n Statement:\r\n - Effect: Allow\r\n Principal:\r\n Service:\r\n - lambda.amazonaws.com\r\n Action:\r\n - sts:AssumeRole\r\n Path: /\r\n Policies:\r\n - PolicyName: LambdaRolePolicy\r\n PolicyDocument:\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - 'logs:*'\r\n Resource: 'arn:aws:logs:*:*:*'\r\n LambdaFunction:\r\n Type: AWS::Lambda::Function\r\n Properties:\r\n FunctionName: \"test-function\"\r\n Role: !GetAtt LambdaFunctionRole.Arn\r\n Handler: index.lambda_handler\r\n Runtime: python3.8\r\n Code:\r\n ZipFile: |\r\n import os\r\n def lambda_handler(event, context):\r\n print(\"environ: \" + str(os.environ))\r\n\r\n\r\n```\r\n3. Run ` aws cloudformation deploy --stack-name test --template-file .\\logs-template.yml --endpoint-url http://127.0.0.1:4566 --region us-east-1`\r\n4. 
Run `aws --endpoint-url http://127.0.0.1:4566 --region us-east-1 lambda invoke --function-name test-function out.txt`\r\n5. Run `aws --endpoint-url=http://localhost:4566 --region us-east-1 logs tail /aws/lambda/test-function`\r\n6. Check for `\"TEST_VAL\": \"True\",` being in the output of the above command.\r\n\r\n### Environment\r\n\r\n```markdown\r\nCurrent configuration (broken)\r\n- OS: Ubuntu 20.04\r\n- LocalStack version: 0.12.18\r\n- LocalStack build date: 2021-09-27\r\n- LocalStack build git hash: 00797f9e\r\n\r\nWorking configuration\r\n- OS: Ubuntu 20.04\r\n- LocalStack version: 0.12.16\r\n- LocalStack Docker container id: b0137bad2045\r\n- LocalStack build date: 2021-07-31\r\n- LocalStack build git hash: f1262f74\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/localstack/localstack/commit/f4a188b6d51155a0831a3246f1d8e4f4be835861", "file_loc": "{'base_commit': 'f4a188b6d51155a0831a3246f1d8e4f4be835861', 'files': [{'path': 'localstack/services/awslambda/lambda_executors.py', 'status': 'modified', 'Loc': {\"('LambdaExecutorContainers', 'run_lambda_executor', 495)\": {'mod': [543, 544]}}}, {'path': 'tests/integration/test_lambda.py', 'status': 'modified', 'Loc': {\"('TestLambdaBaseFeatures', 'test_large_payloads', 477)\": {'add': [492]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/awslambda/lambda_executors.py"], "doc": [], "test": ["tests/integration/test_lambda.py"], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "dec1ba1b94153a4380cc94a0c8bd805f8922b6e3", "is_iss": 0, "iss_html_url": "https://github.com/localstack/localstack/issues/3202", "iss_label": "status: triage needed\narea: configuration", "title": "Illegal path is passed into the HEAD request during the object download", "body": "# Type of request: This is a ...\n[ ] bug report\n[ ] feature request\n\n# Detailed description\nI have tried to upgrade localstack to 0.11.5(and after) and use the service port 4566.\nOn local, I got passes to all tests we have been doing.\nBut on CircleCI, I got errors when download object from s3.\n\n```\nAn error occurred (404) when calling the HeadObject operation: Not Found\n```\n\nIt goes through to 0.11.4, so I'm sure it's a bug, but what do you think?\nWould you have any advice?\n\n## Expected behavior\nMy test does as below\n\n1. upload object to s3 bucket named \"test-bucket\"\n1. list v2 for the bucket\n1. 
download it\n\nSo, I expected to succeed to this.\nOf course, this test has been passed until localstack upgrade.\n\n## Actual behavior\nAt CircleCI, I got below.\nSomehow \"/test-bucket/test-bucket\" is being passed as a HEAD request parameter at download time.\nThis path is double of bucket name \"/test-bucket\".\n\n```\n:\n2020-10-30 05:47:54,523:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"PUT /test-bucket HTTP/1.1\" 200 -\n2020-10-30 05:47:54,536:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"PUT /test-bucket/loadable/2020/04/06/000000_2_e462a109-916a-4b8f-b393-f5b01a6a8c10 HTTP/1.1\" 200 -\n2020-10-30 05:47:54,554:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"GET /test-bucket?list-type=2&max-keys=200&prefix=loadable%2F HTTP/1.1\" 200 -\n2020-10-30 05:47:54,566:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"GET /test-bucket/loadable/2020/04/06/000000_2_e462a109-916a-4b8f-b393-f5b01a6a8c10 HTTP/1.1\" 206 -\n2020-10-30 05:47:54,578:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"HEAD /test-bucket/test-bucket HTTP/1.1\" 404 -\n2020-10-30T05:47:54:WARNING:bootstrap.py: Thread run method ._run at 0x7f333a234f70>(None) failed: An error occurred (404) when calling the HeadObject operation: Not Found Traceback (most recent call last):\n File \"/opt/code/localstack/localstack/utils/bootstrap.py\", line 534, in run\n result = self.func(self.params)\n File \"/opt/code/localstack/localstack/utils/async_utils.py\", line 28, in _run\n return fn(*args, **kwargs)\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 560, in handler\n response = modify_and_forward(method=method, path=path_with_params, data_bytes=data, headers=headers,\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 333, in modify_and_forward\n listener_result = listener.forward_request(method=method,\n File \"/opt/code/localstack/localstack/services/edge.py\", line 81, in forward_request\n return do_forward_request(api, port, method, path, data, headers)\n File \"/opt/code/localstack/localstack/services/edge.py\", line 86, in do_forward_request\n result = do_forward_request_inmem(api, port, method, path, data, headers)\n File \"/opt/code/localstack/localstack/services/edge.py\", line 106, in do_forward_request_inmem\n response = modify_and_forward(method=method, path=path, data_bytes=data, headers=headers,\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 401, in modify_and_forward\n updated_response = update_listener.return_response(**kwargs)\n File \"/opt/code/localstack/localstack/services/s3/s3_listener.py\", line 1254, in return_response\n fix_range_content_type(bucket_name, path, headers, response)\n File \"/opt/code/localstack/localstack/services/s3/s3_listener.py\", line 465, in fix_range_content_type\n result = s3_client.head_object(Bucket=bucket_name, Key=key_name)\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found\n```\n\n# Steps to reproduce\n## Command used to start LocalStack\nsorry...\n\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\nsorry...\n\n\n\n\u2506Issue is synchronized with this [Jira Task](https://localstack.atlassian.net/browse/LOC-71) by 
[Unito](https://www.unito.io/learn-more)\n", "code": null, "pr_html_url": "https://github.com/localstack/localstack/pull/3370", "commit_html_url": null, "file_loc": "{'base_commit': 'dec1ba1b94153a4380cc94a0c8bd805f8922b6e3', 'files': [{'path': 'localstack/services/s3/s3_listener.py', 'status': 'modified', 'Loc': {\"(None, 'uses_path_addressing', 891)\": {'mod': [892]}}}, {'path': 'tests/integration/test_s3.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [36]}, \"('S3ListenerTest', None, 59)\": {'add': [1454]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}} -{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "7a53ba3dad9f7b2e31dac3fbb3162838eb9441c6", "is_iss": 0, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/58", "iss_label": "", "title": "Please support Azure OpenAI", "body": null, "code": null, "pr_html_url": "https://github.com/openinterpreter/open-interpreter/pull/62", "commit_html_url": null, "file_loc": "{'base_commit': '7a53ba3dad9f7b2e31dac3fbb3162838eb9441c6', 'files': [{'path': 'interpreter/cli.py', 'status': 'modified', 'Loc': {\"(None, 'cli', 4)\": {'add': [28, 39]}}}, {'path': 'interpreter/interpreter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52]}, \"('Interpreter', '__init__', 62)\": {'add': [69]}, \"('Interpreter', 'verify_api_key', 255)\": {'add': [263], 'mod': [260, 262, 271, 272, 275, 276, 278, 279, 280, 281, 284, 285, 286, 287, 289]}, \"('Interpreter', 'respond', 296)\": {'add': [340], 'mod': [312, 313, 314, 315, 316, 317, 318]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/cli.py", "interpreter/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "e69269d844b7089dec636516d6edb4f70911ebf6", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/54", "iss_label": "", "title": "Support OPENAI_ API_ BASE for proxy URLs", "body": "How to add OPENAI_ API_ BASE code to use other proxy keys\uff1f", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/abi/screenshot-to-code/commit/e69269d844b7089dec636516d6edb4f70911ebf6", "file_loc": "{'base_commit': 'e69269d844b7089dec636516d6edb4f70911ebf6', 'files': [{'path': 'backend/image_generation.py', 'status': 'modified', 'Loc': {\"(None, 'process_tasks', 8)\": {'mod': [8, 9]}, \"(None, 'generate_image', 23)\": {'mod': [23, 24]}, \"(None, 'generate_images', 63)\": {'mod': [63, 90]}}}, {'path': 'backend/llm.py', 'status': 'modified', 'Loc': {\"(None, 'stream_openai_response', 8)\": {'mod': [9, 11]}}}, {'path': 'backend/main.py', 'status': 'modified', 'Loc': {\"(None, 'stream_code_test', 62)\": {'add': [75, 85, 119], 'mod': [132]}}}, {'path': 'frontend/src/App.tsx', 'status': 'modified', 'Loc': {'(None, None, 39)': {'add': [39]}}}, {'path': 'frontend/src/components/SettingsDialog.tsx', 'status': 'modified', 'Loc': {'(None, None, 78)': {'add': [78]}}}, {'path': 'frontend/src/types.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}}}]}", "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/src/App.tsx", "frontend/src/types.ts", "backend/main.py", "frontend/src/components/SettingsDialog.tsx", "backend/image_generation.py", "backend/llm.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "d363cf4639aacdaefbb8f69919f3c787a4519b7b", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/38479", "iss_label": "triaged\nmodule: numpy", "title": "torch.einsum does not pass equation argument to __torch_function__ API", "body": "## \ud83d\udc1b Bug\r\n\r\nwhen delegating torch.einsum call to an object which implements\r\n`__torch_function__` API the equation argument is not passed resulting in the error.\r\n```TypeError: einsum(): argument 'equation' (position 1) must be str, not Tensor```\r\n\r\nthis was tested on pytorch 1.5.0\r\n\r\nI've actually found the cause of this bug and have written a fix.\r\n\r\nthe following script illustrates the problem and the proposed solution\r\n\r\n## To Reproduce\r\n\r\n```python \r\nimport torch\r\n\r\nclass Wrapper():\r\n def __init__(self,data):\r\n self.data = data\r\n \r\n def __torch_function__(self, func, types, args=(), kwargs=None):\r\n if kwargs is None:\r\n kwargs = {}\r\n\r\n #unwrap inputs if necessary\r\n def unwrap(v):\r\n return v.data if isinstance(v,Wrapper) else v\r\n args = map(unwrap,args)\r\n kwargs = {k:unwrap(v) for k,v in kwargs.items()}\r\n\r\n return func(*args, **kwargs)\r\n\r\n\r\n\r\n# fixed einsum implementation\r\nfrom torch import Tensor,_VF\r\nfrom torch._overrides import has_torch_function,handle_torch_function\r\ndef fixed_einsum(equation,*operands):\r\n if not torch.jit.is_scripting():\r\n if any(type(t) is not Tensor for t in operands) and has_torch_function(operands):\r\n # equation is not passed\r\n # return handle_torch_function(einsum, operands, *operands)\r\n return handle_torch_function(fixed_einsum, operands, equation,*operands)\r\n if len(operands) == 1 and isinstance(operands[0], (list, tuple)):\r\n # the old interface of passing the operands as one list argument\r\n operands = operands[0]\r\n\r\n # recurse incase operands contains value that has torch function\r\n #in the original implementation this line is omitted\r\n return fixed_einsum(equation,*operands)\r\n\r\n return _VF.einsum(equation, operands)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print(torch.__version__)\r\n # uncomment to use fixed einsum\r\n # torch.einsum = fixed_einsum\r\n\r\n #operands are wrapped\r\n x = Wrapper(torch.randn(5))\r\n y = Wrapper(torch.randn(4))\r\n assert torch.allclose(torch.einsum('i,j->ij',x, y),torch.ger(x,y)) # outer product\r\n print(\"works with wrapped inputs\") \r\n\r\n #old interface operands is a list\r\n a = Wrapper(torch.randn(2,3))\r\n b = Wrapper(torch.randn(5,3,7))\r\n c = Wrapper(torch.randn(2,7))\r\n assert torch.allclose(torch.einsum('ik,jkl,il->ij', [a, b, c]),torch.nn.functional.bilinear(a,c,b)) # bilinear interpolation\r\n print(\"works with old API operands is list\")\r\n \r\n #equation is wrapped\r\n As = Wrapper(torch.randn(3,2,5))\r\n Bs = Wrapper(torch.randn(3,5,4))\r\n equation = Wrapper('bij,bjk->bik')\r\n assert torch.allclose(torch.einsum(equation, As, Bs),torch.matmul(As,Bs)) # batch matrix multiplication\r\n print(\"works with equation wrapped\")\r\n\r\n #see that it also works with plain tensors\r\n x = 
torch.randn(5)\r\n y = torch.randn(4)\r\n assert torch.allclose(torch.einsum('i,j->ij',x, y),torch.ger(x,y)) \r\n print(\"works with no wrapped values\")\r\n\r\n\r\n\r\n```\r\n\r\n\r\ncc @albanD @mruberry", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/d363cf4639aacdaefbb8f69919f3c787a4519b7b", "file_loc": "{'base_commit': 'd363cf4639aacdaefbb8f69919f3c787a4519b7b', 'files': [{'path': 'test/test_overrides.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [490]}}}, {'path': 'torch/_overrides.py', 'status': 'modified', 'Loc': {\"(None, 'get_testing_overrides', 144)\": {'mod': [264]}}}, {'path': 'torch/functional.py', 'status': 'modified', 'Loc': {\"(None, 'einsum', 222)\": {'add': [300], 'mod': [297]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/functional.py", "torch/_overrides.py"], "doc": [], "test": ["test/test_overrides.py"], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "81a4aeabdf9d550ceda52a5060f19568de61b265", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/93667", "iss_label": "triaged\ntracker\noncall: pt2\nmodule: dynamo", "title": "14k github models on PyTorch 2.0 pass rates dashboard ", "body": "We are weekly running dynamo-eager, dynamo-eager-fullgraph, export and inductor on 14k ```nn.Modules``` crawled from 1.4k github projects to get coverage level, find and fix bugs. This dashboard page is used to track the pass rates of different backends. \r\n\r\nHow to repro:\r\n* Checkout https://github.com/jansel/pytorch-jit-paritybench\r\n* Batch evaluation with different backends:\r\n * dynamo-eager: ```python main.py --backend eager```\r\n * dynamo-eager-fullgraph: ```python main.py --backend eager --fullgraph```\r\n * export: ```python main.py --compile_mode export```\r\n * inductor: ```python main.py```\r\n* Adhoc evaluation:\r\n * ```pytest ./generated/{filename}.py -k test_{n}``` (e.g, ```pytest ./generated/test_KunpengLi1994_VSRN.py -k test_002```)\r\n * ```python -e ./generated/{filename}.py --backend eager```\r\n\r\nBugs umbrella task(#92670)\r\n\r\ncc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @soumith @wconstab @ngimel @Xia-Weiwen @desertfire @davidberard98", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/81a4aeabdf9d550ceda52a5060f19568de61b265", "file_loc": "{'base_commit': '81a4aeabdf9d550ceda52a5060f19568de61b265', 'files': [{'path': 'test/dynamo/test_misc.py', 'status': 'modified', 'Loc': {\"('MiscTests', None, 40)\": {'add': [2965]}, \"('MiscTests', 'fn', 421)\": {'mod': [422]}, \"('MiscTests', 'test_numel', 420)\": {'mod': [425]}}}, {'path': 'torch/_dynamo/variables/tensor.py', 'status': 'modified', 'Loc': {\"('TensorVariable', 'call_method', 178)\": {'mod': [206]}}}, {'path': 'torch/_dynamo/variables/torch.py', 'status': 'modified', 'Loc': {\"('TorchVariable', 'can_constant_fold_through', 159)\": {'add': [165]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/_dynamo/variables/torch.py", 
"torch/_dynamo/variables/tensor.py"], "doc": [], "test": ["test/dynamo/test_misc.py"], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "aac9e5288f7a9666884705e2b716c260cb5f9afc", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/67002", "iss_label": "module: windows\nmodule: multiprocessing\ntriaged\nskipped", "title": "DISABLED test_fs_sharing (__main__.TestMultiprocessing)", "body": "Flaky failures in the last week: https://fburl.com/scuba/opensource_ci_jobs/inmj698k. They only appear to be on windows\r\n\r\nPlatforms: rocm\r\n\r\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/aac9e5288f7a9666884705e2b716c260cb5f9afc", "file_loc": "{'base_commit': 'aac9e5288f7a9666884705e2b716c260cb5f9afc', 'files': [{'path': 'test/test_multiprocessing.py', 'status': 'modified', 'Loc': {\"('TestMultiprocessing', 'test_receive', 289)\": {'add': [293]}, '(None, None, None)': {'mod': [19, 27]}, \"('TestMultiprocessing', 'test_fill', 259)\": {'mod': [269, 270]}, \"('TestMultiprocessing', None, 251)\": {'mod': [361]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": ["test/test_multiprocessing.py"], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "e37a22128eca7ccac6e289659587a9e1bfe6d242", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/15052", "iss_label": "oncall: jit", "title": "Tracer doesn't work with join/wait", "body": "Reported error: `RuntimeError: output 0 of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.`\r\n\r\nTo reproduce:\r\n```python\r\ndef test_async_script_trace(self):\r\n class Module(torch.jit.ScriptModule):\r\n def __init__(self):\r\n super(Module, self).__init__(False)\r\n\r\n @torch.jit.script_method\r\n def forward(self, x):\r\n future = torch.jit._fork(torch.neg, x)\r\n outputs = []\r\n outputs.append(torch.jit._wait(future))\r\n\r\n return outputs\r\n\r\n class Tuple(nn.Module):\r\n def __init__(self):\r\n super(Tuple, self).__init__()\r\n self.module = Module()\r\n\r\n def forward(self, x):\r\n return tuple(self.module(x))\r\n\r\n x = torch.rand(3, 4)\r\n module = torch.jit.trace(Tuple(), (x), _force_outplace=True)\r\n self.assertEqual(module(x), torch.neg(x))", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/e37a22128eca7ccac6e289659587a9e1bfe6d242", "file_loc": "{'base_commit': 'e37a22128eca7ccac6e289659587a9e1bfe6d242', 'files': [{'path': 'aten/src/ATen/core/jit_type.h', 'status': 'modified', 'Loc': {\"(None, 'FutureType', 516)\": {'add': [534]}}}, {'path': 'test/test_jit.py', 'status': 'modified', 'Loc': {\"('TestAsync', None, 11055)\": {'add': [11224]}}}, {'path': 'torch/csrc/jit/graph_executor.cpp', 'status': 'modified', 'Loc': {'(None, None, 505)': {'mod': [516, 530, 531, 532, 533]}}}, {'path': 'torch/csrc/jit/tracer.cpp', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39]}}}, {'path': 'torch/csrc/jit/tracer.h', 'status': 'modified', 'Loc': {\"(None, 'function', 42)\": {'add': [44]}, \"(None, 'tracer', 24)\": {'mod': [35, 36, 37, 38]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": 
[], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/csrc/jit/tracer.h", "torch/csrc/jit/tracer.cpp", "aten/src/ATen/core/jit_type.h", "torch/csrc/jit/graph_executor.cpp"], "doc": [], "test": ["test/test_jit.py"], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "0c091380cc03b23e68dde7368f3b910c21deb010", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/21680", "iss_label": "high priority\nmodule: cudnn\nmodule: nn\ntriaged\nsmall", "title": "Disable nondeterministic CTCLoss from cuDNN", "body": "## \ud83d\udc1b Bug\r\n\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.I i updated pytorch version and ctc\uff0cuse pytorch_nightly, but in my train ,nn.CTCloss() is still zero,so,i would like to ask if the version pytorch(nightly) has been solved this problem\r\n1.\r\n1.\r\n\r\n\r\n\r\n## Expected behavior\r\n\r\n\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/0c091380cc03b23e68dde7368f3b910c21deb010", "file_loc": "{'base_commit': '0c091380cc03b23e68dde7368f3b910c21deb010', 'files': [{'path': 'aten/src/ATen/native/LossCTC.cpp', 'status': 'modified', 'Loc': {\"(None, 'ctc_loss', 341)\": {'mod': [367]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code\nC"}, "loctype": {"code": ["aten/src/ATen/native/LossCTC.cpp"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "5d08c7201fa5b4641f4277bf248c944b2c297b94", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/843", "iss_label": "bug", "title": "permission denied", "body": "**Bug description**\r\nI'm running basic G4F code from README:\r\n```\r\nimport g4f\r\n\r\n\r\nresponse = g4f.ChatCompletion.create(\r\n model=g4f.models.gpt_4,\r\n messages=[{\"role\": \"user\", \"content\": \"hi\"}],\r\n) # alterative model setting\r\n\r\nprint(response)\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\IonE\\Desktop\\main.py\", line 3, in \r\n import g4f\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\__init__.py\", line 1, in \r\n from . 
import models\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\models.py\", line 3, in \r\n from .Provider import Bard, BaseProvider, GetGpt, H2o, Liaobots, Vercel, Equing\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\__init__.py\", line 6, in \r\n from .Bard import Bard\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\Bard.py\", line 11, in \r\n class Bard(AsyncProvider):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\Bard.py\", line 22, in Bard\r\n cookies: dict = get_cookies(\".google.com\"),\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\base_provider.py\", line 45, in get_cookies\r\n for cookie in browser_cookie3.load(cookie_domain):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 1233, in load\r\n for cookie in cookie_fn(domain_name=domain_name):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 1160, in chrome\r\n return Chrome(cookie_file, domain_name, key_file).load()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 489, in load\r\n with _DatabaseConnetion(self.cookie_file) as con:\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 349, in __enter__\r\n return self.get_connection()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 383, in get_connection\r\n con = method()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 374, in __get_connection_legacy\r\n shutil.copyfile(self.__database_file, self.__temp_cookie_file)\r\n File \"D:\\Program Files\\Python399\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc:\r\nPermissionError: [Errno 13] Permission denied: 'C:\\\\Users\\\\IonE\\\\AppData\\\\Roaming\\\\..\\\\Local\\\\Google\\\\Chrome\\\\User Data\\\\Default\\\\Network\\\\Cookies'\r\n```\r\n\r\n**Environement**\r\n- python 3.9.9\r\n- ukraine", "code": null, "pr_html_url": "https://github.com/xtekky/gpt4free/pull/847", "commit_html_url": null, "file_loc": "{'base_commit': '5d08c7201fa5b4641f4277bf248c944b2c297b94', 'files': [{'path': 'g4f/Provider/Bard.py', 'status': 'modified', 'Loc': {\"('Bard', 'create_async', 17)\": {'add': [33], 'mod': [22]}}}, {'path': 'g4f/Provider/Bing.py', 'status': 'modified', 'Loc': {\"('Bing', 'create_async_generator', 21)\": {'add': [34], 'mod': [24]}}}, {'path': 'g4f/Provider/Hugchat.py', 'status': 'modified', 'Loc': {\"('Hugchat', 'create_completion', 17)\": {'add': [25], 'mod': [23]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Bing.py", "g4f/Provider/Hugchat.py", "g4f/Provider/Bard.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1003", "iss_label": "bug", "title": "please delete site chat.aivvm.com", "body": "**Known Issues** // delete this\r\nplease delete site `chat.aivvm.com`\r\n\r\n**Delete site description**\r\nGpt4free maintainers, I am the administrator of chat.aivvm.com and request to delete this site. 
My website is already under high load\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/xtekky/gpt4free/commit/3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "file_loc": "{'base_commit': '3430b04f870d982d7fba34e3b9d6e5cf3bd3b847', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 218)': {'mod': [218]}, '(None, None, 281)': {'mod': [281]}, '(None, None, 374)': {'mod': [374]}}}, {'path': 'etc/testing/test_chat_completion.py', 'status': 'modified', 'Loc': {\"(None, 'run_async', 19)\": {'mod': [22]}}}, {'path': 'g4f/Provider/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}}}, {'path': 'g4f/Provider/Aivvm.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'mod': [3, 4, 5, 21, 22]}}}, {'path': 'g4f/Provider/deprecated/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14]}}}, {'path': 'g4f/models.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23, 47, 56, 66, 170, 171, 172, 173, 178, 179, 183, 184, 188, 189]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nUser requests that the project remove their own site", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Aivvm.py", "g4f/Provider/deprecated/__init__.py", "g4f/Provider/__init__.py", "g4f/models.py"], "doc": ["README.md"], "test": ["etc/testing/test_chat_completion.py"], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/188", "iss_label": "fixed", "title": "can not run hackingtool !!!!! 
SyntaxError: invalid syntax ???", "body": "# python3 ./hackingtool.py\r\n\r\nTraceback (most recent call last):\r\n File \"/root/hackingtool/./hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/root/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n__________________________________________________________________________________\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/177", "iss_label": "", "title": "Help me please", "body": "Traceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/189", "iss_label": "", "title": "look", "body": "![image](https://user-images.githubusercontent.com/93758292/155468650-b9d57e21-6c82-4005-a3ee-1783699e7f11.png)\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/209", "iss_label": "", "title": "Running Issue", "body": "**Describe the bug**\nA clear and concise description of what the bug is.\n\nI have installed this tool successfully though when i go to run it with or it comes up with the error I have attached as a screenshot. 
Why would this be happening?\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n![image](https://user-images.githubusercontent.com/13176339/161368818-26bf3219-9ba7-4bab-a451-65d379c6d405.jpeg)\n\n**Desktop (please complete the following information):**\n - OS: Kali\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]\n\n**Smartphone (please complete the following information):**\n - Device: rpi4\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 22]\n\n**Additional context**\nAdd any other context about the problem here.", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/185", "iss_label": "fixed", "title": "SyntaxError: invalid syntax traceback most recent call last", "body": "\u250c\u2500\u2500(root\ud83d\udc80localhost)-[~]\r\n\u2514\u2500# hackingtool\r\nTraceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n\r\n\r\n\r\nthis happens when i type in \"hackingtool\" in terminal. 
any fixes?", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/187", "iss_label": "fixed", "title": "from tools.ddos import DDOSTools", "body": "# sudo hackingtool\r\nTraceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "bf0886bae0ccbc8c5d285b6e2affe7e40474f970", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/16924", "iss_label": "Bug\nEasy\nmodule:metrics", "title": "Matthews correlation coefficient metric throws misleading division by zero RuntimeWarning", "body": "#### Description\r\nWith tested values all equal, `sklearn.metrics.matthews_corrcoef` throws a `RuntimeWarning` reporting a division by zero. 
This behavior was already reported in #1937 and reported fixed, but reappears in recent versions.\r\n\r\n#### Steps/Code to Reproduce\r\nThe snippet below reproduces the warning.\r\n```python\r\nimport sklearn.metrics \r\ntrues = [1,0,1,1,0] \r\npreds = [0,0,0,0,0] \r\nsklearn.metrics.matthews_corrcoef(trues, preds)\r\n```\r\n\r\n#### Expected Results\r\nNo warning is thrown.\r\n\r\n#### Actual Results\r\nThe following warning is thrown:\r\n```\r\nC:\\anaconda\\envs\\sklearn-test\\lib\\site-packages\\sklearn\\metrics\\_classification.py:900: RuntimeWarning: invalid value encountered in double_scalars\r\n mcc = cov_ytyp / np.sqrt(cov_ytyt * cov_ypyp)\r\n```\r\n\r\n#### Versions\r\n```\r\nSystem:\r\n python: 3.8.2 (default, Mar 25 2020, 08:56:29) [MSC v.1916 64 bit (AMD64)]\r\nexecutable: C:\\anaconda\\envs\\sklearn-test\\python.exe\r\n machine: Windows-10-10.0.18362-SP0\r\n\r\nPython dependencies:\r\n pip: 20.0.2\r\nsetuptools: 46.1.3.post20200330\r\n sklearn: 0.22.1\r\n numpy: 1.18.1\r\n scipy: 1.4.1\r\n Cython: None\r\n pandas: None\r\nmatplotlib: None\r\n joblib: 0.14.1\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/19977", "commit_html_url": null, "file_loc": "{'base_commit': 'bf0886bae0ccbc8c5d285b6e2affe7e40474f970', 'files': [{'path': 'sklearn/metrics/_classification.py', 'status': 'modified', 'Loc': {\"(None, 'matthews_corrcoef', 800)\": {'mod': [881, 883, 886]}}}, {'path': 'sklearn/metrics/tests/test_classification.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23, 625]}, \"(None, 'test_matthews_corrcoef', 671)\": {'mod': [687, 688, 690, 691, 694, 696, 697]}, \"(None, 'test_matthews_corrcoef_multiclass', 713)\": {'mod': [734, 737, 738, 739, 757, 761, 762, 763, 765, 766]}}}, {'path': 'sklearn/utils/_testing.py', 'status': 'modified', 'Loc': {\"(None, 'assert_warns_div0', 190)\": {'mod': [190, 191, 193, 194, 196, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/metrics/_classification.py", "sklearn/utils/_testing.py"], "doc": [], "test": ["sklearn/metrics/tests/test_classification.py"], "config": [], "asset": []}} -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "4ac6a90a82e4a8d7b5338c18ae8a16559c98ba10", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/5101", "iss_label": "", "title": "LatentDirichletAllocation has superfluous attributes", "body": "It has `dirichlet_component_` (undocumented) and `exp_dirichlet_component_` (exponential of same). 
I propose to get rid of at least the latter.\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/5111", "commit_html_url": null, "file_loc": "{'base_commit': '4ac6a90a82e4a8d7b5338c18ae8a16559c98ba10', 'files': [{'path': 'sklearn/decomposition/online_lda.py', 'status': 'modified', 'Loc': {\"('LatentDirichletAllocation', '_approx_bound', 542)\": {'add': [579], 'mod': [597, 612]}, \"('LatentDirichletAllocation', '_init_latent_vars', 283)\": {'mod': [305, 306, 308]}, \"('LatentDirichletAllocation', '_em_step', 366)\": {'mod': [407, 408]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1\nSuperfluous options", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/decomposition/online_lda.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "is_iss": 0, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/76", "iss_label": "Bug", "title": "Sparse cumsum functions do not work", "body": "e.g. SparseSeries.cumsum\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/wesm/pandas/commit/05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "file_loc": "{'base_commit': '05123af1b2f8db1bc4f05c22515ef378cbeefbd3', 'files': [{'path': 'pandas/core/frame.py', 'status': 'modified', 'Loc': {\"('DataFrame', None, 97)\": {'mod': [1962, 1963, 1964, 1966, 1967, 1968, 1969, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 2021, 2022, 2023, 2025, 2026, 2027, 2028, 2030, 2031, 2032, 2033, 2034, 2035, 2036, 2037, 2038, 2039, 2041, 2043]}}}, {'path': 'pandas/core/generic.py', 'status': 'modified', 'Loc': {\"('PandasGeneric', '_reindex_axis', 162)\": {'add': [168]}}}, {'path': 'pandas/core/series.py', 'status': 'modified', 'Loc': {\"('Series', 'cumsum', 570)\": {'mod': [580, 581, 582, 583, 584, 585, 591, 592, 593, 594]}}}, {'path': 'pandas/core/sparse.py', 'status': 'modified', 'Loc': {\"('SparseSeries', None, 152)\": {'add': [512]}, \"('SparseDataFrame', 'count', 1058)\": {'add': [1059]}, '(None, None, None)': {'mod': [13]}}}, {'path': 'pandas/tests/test_frame.py', 'status': 'modified', 'Loc': {\"('TestDataFrame', None, 539)\": {'add': [2271]}, \"('TestDataFrame', 'test_cumsum', 2271)\": {'add': [2276, 2283, 2284], 'mod': [2273, 2274, 2286, 2287]}}}, {'path': 'pandas/tests/test_sparse.py', 'status': 'modified', 'Loc': {\"('TestSparseSeries', None, 111)\": {'add': [577]}, \"('TestSparseDataFrame', 'test_count', 1065)\": {'add': [1068]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["pandas/core/frame.py", "pandas/core/generic.py", "pandas/core/series.py", "pandas/core/sparse.py"], "doc": [], "test": ["pandas/tests/test_frame.py", "pandas/tests/test_sparse.py"], "config": [], "asset": []}} -{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "65c0441a41b2dcaeebb648274d30978419a8661a", "is_iss": 0, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16607", "iss_label": "Datetime\nCompat", "title": "to_datetime should support ISO week year", "body": "`to_datetime` does not currently seem to support `ISO week year` like `strptime` does:\r\n\r\n```\r\nIn [38]: datetime.date(2016, 1, 1).strftime('%G-%V')\r\nOut[38]: 
'2015-53'\r\n\r\nIn [39]: datetime.datetime.strptime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', '%G-%V-%u')\r\nOut[39]: datetime.datetime(2015, 12, 28, 0, 0)\r\n\r\nIn [41]: pd.to_datetime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', format='%G-%V-%u')\r\n ---------------------------------------------------------------------------\r\n TypeError Traceback (most recent call last)\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 443 try:\r\n --> 444 values, tz = tslib.datetime_to_datetime64(arg)\r\n 445 return DatetimeIndex._simple_new(values, name=name, tz=tz)\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.datetime_to_datetime64 (pandas/_libs/tslib.c:33275)()\r\n\r\n TypeError: Unrecognized value type: \r\n\r\n During handling of the above exception, another exception occurred:\r\n\r\n ValueError Traceback (most recent call last)\r\n in ()\r\n ----> 1 pd.to_datetime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', format='%G-%V-%u')\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in to_datetime(arg, errors, dayfirst, yearfirst, utc, box, format, exact, unit, infer_datetime_format, origin)\r\n 516 result = _convert_listlike(arg, box, format)\r\n 517 else:\r\n --> 518 result = _convert_listlike(np.array([arg]), box, format)[0]\r\n 519 \r\n 520 return result\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 445 return DatetimeIndex._simple_new(values, name=name, tz=tz)\r\n 446 except (ValueError, TypeError):\r\n --> 447 raise e\r\n 448 \r\n 449 if arg is None:\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 412 try:\r\n 413 result = tslib.array_strptime(arg, format, exact=exact,\r\n --> 414 errors=errors)\r\n 415 except tslib.OutOfBoundsDatetime:\r\n 416 if errors == 'raise':\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.array_strptime (pandas/_libs/tslib.c:63124)()\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.array_strptime (pandas/_libs/tslib.c:63003)()\r\n\r\n ValueError: 'G' is a bad directive in format '%G-%V-%u'\r\n\r\n```\r\n\r\n
    \r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\n\r\npandas: 0.20.1\r\npytest: 3.1.0\r\npip: 9.0.1\r\nsetuptools: 28.8.0\r\nCython: 0.25.2\r\nnumpy: 1.12.1\r\nscipy: 0.19.0\r\nxarray: None\r\nIPython: 6.0.0\r\nsphinx: None\r\npatsy: 0.4.1\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: None\r\ntables: 3.4.2\r\nnumexpr: 2.6.2\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: 0.999999999\r\nsqlalchemy: 1.1.10\r\npymysql: None\r\npsycopg2: 2.7.1 (dt dec pq3 ext lo64)\r\njinja2: 2.9.6\r\ns3fs: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n
    \r\n", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/25541", "commit_html_url": null, "file_loc": "{'base_commit': '65c0441a41b2dcaeebb648274d30978419a8661a', 'files': [{'path': 'doc/source/whatsnew/v0.25.0.rst', 'status': 'modified', 'Loc': {'(None, None, 21)': {'add': [21]}}}, {'path': 'pandas/_libs/tslibs/strptime.pyx', 'status': 'modified', 'Loc': {'(None, None, 79)': {'add': [79]}, '(None, None, 171)': {'add': [171]}, '(None, None, 267)': {'add': [267]}, '(None, None, 513)': {'add': [513]}, '(None, None, 520)': {'add': [520]}, '(None, None, 521)': {'add': [521]}, '(None, None, 622)': {'add': [622]}, '(None, None, 57)': {'mod': [57]}, '(None, None, 178)': {'mod': [178]}, '(None, None, 271)': {'mod': [271, 272, 273, 274]}, '(None, None, 596)': {'mod': [596, 597]}, '(None, None, 600)': {'mod': [600]}}}, {'path': 'pandas/core/tools/datetimes.py', 'status': 'modified', 'Loc': {\"(None, 'to_datetime', 403)\": {'add': [457]}}}, {'path': 'pandas/tests/indexes/datetimes/test_tools.py', 'status': 'modified', 'Loc': {\"('TestToDatetime', None, 246)\": {'add': [246]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code/Doc"}, "loctype": {"code": ["pandas/core/tools/datetimes.py", "pandas/_libs/tslibs/strptime.pyx"], "doc": ["doc/source/whatsnew/v0.25.0.rst"], "test": ["pandas/tests/indexes/datetimes/test_tools.py"], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e", "iss_html_url": "https://github.com/pallets/flask/issues/1971", "iss_label": "", "title": "Implement RFC 7233", "body": "It would be great to support [RFC 7233 : Hypertext Transfer Protocol (HTTP/1.1): Range Requests](https://tools.ietf.org/html/rfc7233) for next major version, at least for non multipart/byteranges media type.\n\nI'm willing to implement this, so please share your thoughts about this.\n\nWhat must be done:\n- Modify `send_file` method to support Range Requests\n - Use existing `conditionnal` parameter to enable Range Requests support ?\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/2031", "commit_html_url": null, "file_loc": "{'base_commit': '01081dbe6cdfa3fc43d8e1fff708d4ed95e1be7e', 'files': [{'path': 'CHANGES', 'status': 'modified', 'Loc': {'(None, None, 20)': {'add': [20]}}}, {'path': 'flask/helpers.py', 'status': 'modified', 'Loc': {\"(None, 'send_file', 430)\": {'add': [448, 502], 'mod': [538, 544, 578]}, '(None, None, None)': {'mod': [28, 29]}}}, {'path': 'tests/test_helpers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, \"('TestSendfile', None, 356)\": {'add': [464]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/helpers.py"], "doc": ["CHANGES"], "test": ["tests/test_helpers.py"], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "673e5af658cf029e82d87047dcb7ebee3d343d10", "iss_html_url": "https://github.com/pallets/flask/issues/2823", "iss_label": "", "title": "Flask complains a .env file exists when not using python-dotenv, even though that .env is a directory", "body": "I place my virtualenvs in a `.env` directory in my project directory. 
Flask 1.x sees this directory and thinks it might be a \"dotenv\" file (even though it is a directory).\r\n\r\n### Expected Behavior\r\n\r\n`flask` should ignore a `.env` directory when `python-dotenv` is not installed.\r\n\r\n### Actual Behavior\r\n\r\n`flask` says:\r\n\r\n> * Tip: There are .env files present. Do \"pip install python-dotenv\" to use them.\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n", "code": null, "pr_html_url": "https://github.com/pallets/flask/pull/2827", "commit_html_url": null, "file_loc": "{'base_commit': '673e5af658cf029e82d87047dcb7ebee3d343d10', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {\"(None, 'load_dotenv', 567)\": {'mod': [587]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/cli.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "8e589daaf2cec6a10262b8ff88801127f2fa14fd", "iss_html_url": "https://github.com/pallets/flask/issues/4220", "iss_label": "", "title": "`template_filter` decorator typing does not support custom filters with multiple arguments", "body": "`template_filter` decorator typing does not support custom filters that take in multiple arguments. Consider:\r\n\r\n```py\r\nfrom flask import Flask\r\n\r\n\r\napp = Flask(__name__)\r\n\r\n\r\n@app.template_filter('foo_bar')\r\ndef foo_bar_filter(foo, bar):\r\n return f'{foo} {bar}'\r\n```\r\n`mypy` will return the following error message:\r\n```\r\nerror: Argument 1 has incompatible type \"Callable[[Any, Any], Any]\"; expected \"Callable[[Any], str]\" [arg-type]\r\n```\r\nAs custom filters with multiple arguments are supported by Jinja (https://jinja.palletsprojects.com/en/3.0.x/api/#custom-filters), I think this typing error is a false positive.\r\n\r\nEnvironment:\r\n\r\n- Python version: 3.6.13\r\n- Flask version: 2.0.1\r\n- Mypy version: 0.812\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pallets/flask/commit/8e589daaf2cec6a10262b8ff88801127f2fa14fd", "file_loc": "{'base_commit': '8e589daaf2cec6a10262b8ff88801127f2fa14fd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 10)': {'add': [10]}}}, {'path': 'src/flask/typing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [43, 44, 45]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/flask/typing.py"], "doc": ["CHANGES.rst"], "test": [], "config": [], "asset": []}} +{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "a4df5010f49044eb1f1713057e8914e6a5a104b3", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1073", "iss_label": "false positive", "title": "producthunt.com false positive", "body": "\r\n\r\n## Checklist\r\n\r\n- [X] I'm reporting a website that is returning **false positive** results\r\n- [X] I've checked for similar site support requests including closed ones\r\n- [X] I've checked for pull requests attempting to fix this false positive\r\n- [X] I'm only reporting **one** site (create a seperate issue for each site)\r\n\r\n## Description\r\n\r\n\r\nhttps://www.producthunt.com/@adasaaakzzzzzzzzsdsdsdasdadadasqe22aasd\r\n", "code": null, "pr_html_url": 
null, "commit_html_url": "https://github.com/sherlock-project/sherlock/commit/a4df5010f49044eb1f1713057e8914e6a5a104b3", "file_loc": "{'base_commit': 'a4df5010f49044eb1f1713057e8914e6a5a104b3', 'files': [{'path': 'sherlock/resources/data.json', 'status': 'modified', 'Loc': {'(None, None, 1159)': {'mod': [1159]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sherlock/resources/data.json"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "d2803c0fb7d0ba9361dcba8eb9bcebbf2f774958", "iss_html_url": "https://github.com/keras-team/keras/issues/11023", "iss_label": "", "title": "Cannot load_model", "body": "Thank you!\r\n\r\n- [ ] Check that you are up-to-date with the master branch of Keras. You can update with:\r\npip install git+git://github.com/keras-team/keras.git --upgrade --no-deps\r\n\r\n- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](https://www.tensorflow.org/get_started/os_setup).\r\n\r\n- [ ] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\r\npip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\r\n\r\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\r\n\r\nI am using Google Colab to train a CNN and then save the entire model to a `.h5` file. The code is available here: [CNN-Colab](https://gist.github.com/abhisheksoni27/184c49ca703eb124e1b17eb8dd8af518)\r\n\r\nThe model gets saved but when I later try to load it back, I get the following error:\r\n\r\n```\r\nTypeError: float() argument must be a string or a number, not 'dict'\r\n```\r\n\r\nThe entire Output log is here: [CNN - Colab - Error](https://gist.github.com/abhisheksoni27/732bec240629d2dd721e80130cb2956b)\r\n", "code": null, "pr_html_url": "https://github.com/keras-team/keras/pull/10727", "commit_html_url": null, "file_loc": "{'base_commit': 'd2803c0fb7d0ba9361dcba8eb9bcebbf2f774958', 'files': [{'path': 'keras/engine/saving.py', 'status': 'modified', 'Loc': {\"(None, 'get_json_type', 61)\": {'mod': [82, 83]}}}, {'path': 'tests/test_model_saving.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 643]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/engine/saving.py"], "doc": [], "test": ["tests/test_model_saving.py"], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "b28ece0f34e54d1c980e31223451f3b2f0f20ff9", "iss_html_url": "https://github.com/nvbn/thefuck/issues/1021", "iss_label": "", "title": "Git checkout should provide multiple corrections", "body": "When correcting git checkout, the default is to use the 'closest branch'. 
We have a lot of branches with similar names, but quite often, what I actually meant to do was supply the '-b' flag.\r\n\r\nCan the git checkout rule be updated to return all of the possible options, rather than trying to guess, based on some arbitrary priority?\r\n", "code": null, "pr_html_url": "https://github.com/nvbn/thefuck/pull/1022", "commit_html_url": null, "file_loc": "{'base_commit': 'b28ece0f34e54d1c980e31223451f3b2f0f20ff9', 'files': [{'path': 'tests/rules/test_git_checkout.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [59, 62, 66, 70]}}}, {'path': 'thefuck/rules/git_checkout.py', 'status': 'modified', 'Loc': {\"(None, 'get_new_command', 31)\": {'add': [36], 'mod': [38, 39, 40, 41, 42, 43]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/git_checkout.py"], "doc": [], "test": ["tests/rules/test_git_checkout.py"], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "2d81166213c403dce5c04d1fb73ba5d3e57d6676", "iss_html_url": "https://github.com/nvbn/thefuck/issues/660", "iss_label": "", "title": "Slow execution time", "body": "The command output is very slow on macOS w/ fish shell. Reproduction rate is ~80% for me.\r\n\r\nVersion: The Fuck 3.18 using Python 2.7.10\r\nShell: fish, version 2.6.0\r\nOS: macOS 10.12.5\r\nDebug Output:\r\n```\r\n\u276f fuck 333ms\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'no_colors': False,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/Users/sbennett/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Execution timed out!\r\nDEBUG: Call: fish -ic \"fuck\"; with env: {'PYTHONIOENCODING': 'utf-8', 'VERSIONER_PYTHON_PREFER_32_BIT': 'no', 'TERM_PROGRAM_VERSION': '3.0.15', 'LOGNAME': 'sbennett', 'USER': 'sbennett', 'HOME': '/Users/sbennett', 'PATH': '/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin', 'TERM_PROGRAM': 'iTerm.app', 'LANG': 'C', 'THEFUCK_DEBUG': 'true', 'TERM': 'xterm-256color', 'Apple_PubSub_Socket_Render': '/private/tmp/com.apple.launchd.1eq3gwtm7Y/Render', 'COLORFGBG': '15;0', 'VERSIONER_PYTHON_VERSION': '2.7', 'SHLVL': '2', 'XPC_FLAGS': '0x0', 'ITERM_SESSION_ID': 'w1t1p0:E781FA41-C385-4CCE-A9E0-EBF190B3D246', 'TERM_SESSION_ID': 'w1t1p0:E781FA41-C385-4CCE-A9E0-EBF190B3D246', 'SSH_AUTH_SOCK': '/private/tmp/com.apple.launchd.leMomVKppy/Listeners', 'TF_ALIAS': 'fuck', 'XPC_SERVICE_NAME': '0', 'SHELL': '/usr/local/bin/fish', 'ITERM_PROFILE': 'Default', 'LC_ALL': 'C', 'TMPDIR': '/var/folders/0s/c0f2hl495352w24847p7ybwm35h1r_/T/', 'GIT_TRACE': '1', '__CF_USER_TEXT_ENCODING': '0x658070A:0x0:0x0', 'PWD': '/Users/sbennett'}; is slow: took: 0:00:03.018166\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000511\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.000571\r\nDEBUG: Importing rule: apt_get_search; took: 0:00:00.000224\r\nDEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000715\r\nDEBUG: Importing rule: aws_cli; took: 0:00:00.000235\r\nDEBUG: Importing rule: brew_install; took: 0:00:00.000279\r\nDEBUG: Importing rule: brew_link; took: 0:00:00.000217\r\nDEBUG: Importing rule: brew_uninstall; 
took: 0:00:00.000276\r\nDEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000105\r\nDEBUG: Importing rule: brew_update_formula; took: 0:00:00.000222\r\nDEBUG: Importing rule: brew_upgrade; took: 0:00:00.000061\r\nDEBUG: Importing rule: cargo; took: 0:00:00.000049\r\nDEBUG: Importing rule: cargo_no_command; took: 0:00:00.000223\r\nDEBUG: Importing rule: cd_correction; took: 0:00:00.000950\r\nDEBUG: Importing rule: cd_mkdir; took: 0:00:00.000342\r\nDEBUG: Importing rule: cd_parent; took: 0:00:00.000050\r\nDEBUG: Importing rule: chmod_x; took: 0:00:00.000058\r\nDEBUG: Importing rule: composer_not_command; took: 0:00:00.001520\r\nDEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000677\r\nDEBUG: Importing rule: cpp11; took: 0:00:00.000324\r\nDEBUG: Importing rule: dirty_untar; took: 0:00:00.001812\r\nDEBUG: Importing rule: dirty_unzip; took: 0:00:00.000257\r\nDEBUG: Importing rule: django_south_ghost; took: 0:00:00.000066\r\nDEBUG: Importing rule: django_south_merge; took: 0:00:00.000113\r\nDEBUG: Importing rule: docker_not_command; took: 0:00:00.000528\r\nDEBUG: Importing rule: dry; took: 0:00:00.000068\r\nDEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000396\r\nDEBUG: Importing rule: fix_alt_space; took: 0:00:00.000337\r\nDEBUG: Importing rule: fix_file; took: 0:00:00.003110\r\nDEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000506\r\nDEBUG: Importing rule: git_add; took: 0:00:00.000520\r\nDEBUG: Importing rule: git_add_force; took: 0:00:00.000252\r\nDEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000249\r\nDEBUG: Importing rule: git_branch_delete; took: 0:00:00.000232\r\nDEBUG: Importing rule: git_branch_exists; took: 0:00:00.000309\r\nDEBUG: Importing rule: git_branch_list; took: 0:00:00.000236\r\nDEBUG: Importing rule: git_checkout; took: 0:00:00.000254\r\nDEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000238\r\nDEBUG: Importing rule: git_diff_staged; took: 0:00:00.000228\r\nDEBUG: Importing rule: git_fix_stash; took: 0:00:00.000252\r\nDEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000231\r\nDEBUG: Importing rule: git_help_aliased; took: 0:00:00.000231\r\nDEBUG: Importing rule: git_not_command; took: 0:00:00.000363\r\nDEBUG: Importing rule: git_pull; took: 0:00:00.000242\r\nDEBUG: Importing rule: git_pull_clone; took: 0:00:00.000239\r\nDEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000244\r\nDEBUG: Importing rule: git_push; took: 0:00:00.000246\r\nDEBUG: Importing rule: git_push_force; took: 0:00:00.000238\r\nDEBUG: Importing rule: git_push_pull; took: 0:00:00.000221\r\nDEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000343\r\nDEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000250\r\nDEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000164\r\nDEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000159\r\nDEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000241\r\nDEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000493\r\nDEBUG: Importing rule: git_rm_staged; took: 0:00:00.000347\r\nDEBUG: Importing rule: git_stash; took: 0:00:00.000286\r\nDEBUG: Importing rule: git_stash_pop; took: 0:00:00.000281\r\nDEBUG: Importing rule: git_tag_force; took: 0:00:00.000268\r\nDEBUG: Importing rule: git_two_dashes; took: 0:00:00.000239\r\nDEBUG: Importing rule: go_run; took: 0:00:00.000217\r\nDEBUG: Importing rule: gradle_no_task; took: 0:00:00.000566\r\nDEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000227\r\nDEBUG: Importing rule: 
grep_arguments_order; took: 0:00:00.000235\r\nDEBUG: Importing rule: grep_recursive; took: 0:00:00.000222\r\nDEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000479\r\nDEBUG: Importing rule: gulp_not_task; took: 0:00:00.000227\r\nDEBUG: Importing rule: has_exists_script; took: 0:00:00.000240\r\nDEBUG: Importing rule: heroku_not_command; took: 0:00:00.000310\r\nDEBUG: Importing rule: history; took: 0:00:00.000067\r\nDEBUG: Importing rule: hostscli; took: 0:00:00.000383\r\nDEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000296\r\nDEBUG: Importing rule: java; took: 0:00:00.000226\r\nDEBUG: Importing rule: javac; took: 0:00:00.000216\r\nDEBUG: Importing rule: lein_not_task; took: 0:00:00.000370\r\nDEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000237\r\nDEBUG: Importing rule: ln_s_order; took: 0:00:00.000241\r\nDEBUG: Importing rule: ls_all; took: 0:00:00.000208\r\nDEBUG: Importing rule: ls_lah; took: 0:00:00.000347\r\nDEBUG: Importing rule: man; took: 0:00:00.000241\r\nDEBUG: Importing rule: man_no_space; took: 0:00:00.000062\r\nDEBUG: Importing rule: mercurial; took: 0:00:00.000234\r\nDEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000085\r\nDEBUG: Importing rule: mkdir_p; took: 0:00:00.000252\r\nDEBUG: Importing rule: mvn_no_command; took: 0:00:00.000213\r\nDEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000260\r\nDEBUG: Importing rule: no_command; took: 0:00:00.000261\r\nDEBUG: Importing rule: no_such_file; took: 0:00:00.000066\r\nDEBUG: Importing rule: npm_missing_script; took: 0:00:00.000593\r\nDEBUG: Importing rule: npm_run_script; took: 0:00:00.000235\r\nDEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000378\r\nDEBUG: Importing rule: open; took: 0:00:00.000605\r\nDEBUG: Importing rule: pacman; took: 0:00:00.000366\r\nDEBUG: Importing rule: pacman_not_found; took: 0:00:00.000111\r\nDEBUG: Importing rule: path_from_history; took: 0:00:00.000099\r\nDEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000315\r\nDEBUG: Importing rule: port_already_in_use; took: 0:00:00.000183\r\nDEBUG: Importing rule: python_command; took: 0:00:00.000261\r\nDEBUG: Importing rule: python_execute; took: 0:00:00.000232\r\nDEBUG: Importing rule: quotation_marks; took: 0:00:00.000052\r\nDEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000224\r\nDEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000051\r\nDEBUG: Importing rule: rm_dir; took: 0:00:00.000242\r\nDEBUG: Importing rule: rm_root; took: 0:00:00.000235\r\nDEBUG: Importing rule: scm_correction; took: 0:00:00.000254\r\nDEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000222\r\nDEBUG: Importing rule: sl_ls; took: 0:00:00.000052\r\nDEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000239\r\nDEBUG: Importing rule: sudo; took: 0:00:00.000059\r\nDEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000231\r\nDEBUG: Importing rule: switch_lang; took: 0:00:00.000091\r\nDEBUG: Importing rule: systemctl; took: 0:00:00.000378\r\nDEBUG: Importing rule: test.py; took: 0:00:00.000051\r\nDEBUG: Importing rule: tmux; took: 0:00:00.000212\r\nDEBUG: Importing rule: touch; took: 0:00:00.000223\r\nDEBUG: Importing rule: tsuru_login; took: 0:00:00.000281\r\nDEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000223\r\nDEBUG: Importing rule: unknown_command; took: 0:00:00.000062\r\nDEBUG: Importing rule: vagrant_up; took: 0:00:00.000308\r\nDEBUG: Importing rule: whois; took: 0:00:00.000282\r\nDEBUG: Importing rule: 
workon_doesnt_exists; took: 0:00:00.000309\r\nDEBUG: Importing rule: yarn_alias; took: 0:00:00.000219\r\nDEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000494\r\nDEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000357\r\nDEBUG: Importing rule: yarn_help; took: 0:00:00.000232\r\nDEBUG: Trying rule: dirty_unzip; took: 0:00:00.000568\r\nNo fucks given\r\nDEBUG: Total took: 0:00:03.282835\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/2d81166213c403dce5c04d1fb73ba5d3e57d6676", "file_loc": "{'base_commit': '2d81166213c403dce5c04d1fb73ba5d3e57d6676', 'files': [{'path': 'tests/shells/test_fish.py', 'status': 'modified', 'Loc': {\"('TestFish', 'test_get_overridden_aliases', 29)\": {'mod': [31, 32]}}}, {'path': 'thefuck/shells/fish.py', 'status': 'modified', 'Loc': {\"('Fish', '_get_overridden_aliases', 40)\": {'mod': [46]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/shells/fish.py"], "doc": [], "test": ["tests/shells/test_fish.py"], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "iss_html_url": "https://github.com/nvbn/thefuck/issues/1120", "iss_label": "", "title": "Trying rule missing_space_before_subcommand taking so long", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.29 using Python 3.8.2 and ZSH 5.8\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n ubuntu 20.04 on wsl2\r\n\r\nHow to reproduce the bug:\r\n\r\n env THEFUCK_DEBUG=true thefuck test\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\n DEBUG: Trying rule: missing_space_before_subcommand; took: 0:00:08.341279\r\n No fucks given\r\n\r\nAnything else you think is relevant:\r\n\r\nI have no idea why this taking so long. 
anyone else having this problem?\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/KiaraGrouwstra/thefuck/commit/6da0bc557f0fd94ea1397d3a7f508be896cc98d8", "file_loc": "{'base_commit': '6da0bc557f0fd94ea1397d3a7f508be896cc98d8', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 436)': {'add': [436]}, '(None, None, 468)': {'add': [468]}}}, {'path': 'tests/test_conf.py', 'status': 'modified', 'Loc': {\"('TestSettingsFromEnv', 'test_from_env', 48)\": {'add': [67], 'mod': [57]}}}, {'path': 'tests/test_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [96]}}}, {'path': 'thefuck/conf.py', 'status': 'modified', 'Loc': {\"('Settings', '_val_from_env', 91)\": {'mod': [104]}}}, {'path': 'thefuck/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [46, 61]}}}, {'path': 'thefuck/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [106]}, \"(None, 'get_all_executables', 112)\": {'add': [121]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/conf.py", "thefuck/utils.py", "thefuck/const.py"], "doc": ["README.md"], "test": ["tests/test_conf.py", "tests/test_utils.py"], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "a84671dd3b7505d4d73f11ee9c7d057429542e24", "iss_html_url": "https://github.com/nvbn/thefuck/issues/20", "iss_label": "", "title": "Some Unicode error in Ubuntu 14.10", "body": "``` bash\n$ apt-get update\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0444\u0430\u0439\u043b \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0438 /var/lib/apt/lists/lock - open (13: \u041e\u0442\u043a\u0430\u0437\u0430\u043d\u043e \u0432 \u0434\u043e\u0441\u0442\u0443\u043f\u0435)\nE: \u041d\u0435\u0432\u043e\u0437\u043c\u043e\u0436\u043d\u043e \u0437\u0430\u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u0430\u0442\u044c \u043a\u0430\u0442\u0430\u043b\u043e\u0433 /var/lib/apt/lists/\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u043e\u0442\u043a\u0440\u044b\u0442\u044c \u0444\u0430\u0439\u043b \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0438 /var/lib/dpkg/lock - open (13: \u041e\u0442\u043a\u0430\u0437\u0430\u043d\u043e \u0432 \u0434\u043e\u0441\u0442\u0443\u043f\u0435)\nE: \u041d\u0435 \u0443\u0434\u0430\u043b\u043e\u0441\u044c \u0432\u044b\u043f\u043e\u043b\u043d\u0438\u0442\u044c \u0431\u043b\u043e\u043a\u0438\u0440\u043e\u0432\u043a\u0443 \u0443\u043f\u0440\u0430\u0432\u043b\u044f\u044e\u0449\u0435\u0433\u043e \u043a\u0430\u0442\u0430\u043b\u043e\u0433\u0430 (/var/lib/dpkg/); \u0443 \u0432\u0430\u0441 \u0435\u0441\u0442\u044c \u043f\u0440\u0430\u0432\u0430 \u0441\u0443\u043f\u0435\u0440\u043f\u043e\u043b\u044c\u0437\u043e\u0432\u0430\u0442\u0435\u043b\u044f?\n$ fuck\nTraceback (most recent call last):\n File \"/usr/local/bin/thefuck\", line 9, in \n load_entry_point('thefuck==1.7', 'console_scripts', 'thefuck')()\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/main.py\", line 91, in main\n matched_rule = get_matched_rule(command, rules, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/main.py\", line 67, in get_matched_rule\n if rule.match(command, settings):\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/utils.py\", line 41, in wrapper\n return fn(command, 
settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/rules/no_command.py\", line 19, in match\n output = _get_output(command, settings)\n File \"/usr/local/lib/python2.7/dist-packages/thefuck/rules/no_command.py\", line 13, in _get_output\n return result.stderr.read().decode()\nUnicodeDecodeError: 'ascii' codec can't decode byte 0xd0 in position 0: ordinal not in range(128)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/a84671dd3b7505d4d73f11ee9c7d057429542e24", "file_loc": "{'base_commit': 'a84671dd3b7505d4d73f11ee9c7d057429542e24', 'files': [{'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'thefuck/rules/no_command.py', 'status': 'modified', 'Loc': {\"(None, '_get_output', 9)\": {'mod': [13]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/no_command.py", "setup.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "622298549172754afff07a8ea1f55358062e17a7", "iss_html_url": "https://github.com/nvbn/thefuck/issues/330", "iss_label": "", "title": "Add command options (--version, --help, --update/--upgrade)", "body": "And perhaps a manpage too, even if it only says \"Please use fuck --help for documentation\"\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/622298549172754afff07a8ea1f55358062e17a7", "file_loc": "{'base_commit': '622298549172754afff07a8ea1f55358062e17a7', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 110)': {'mod': [110]}, '(None, None, 112)': {'mod': [112]}}}, {'path': 'thefuck/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 3], 'mod': [83, 99, 100]}, \"(None, 'print_alias', 100)\": {'add': [101]}, \"(None, 'fix_command', 86)\": {'mod': [97]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/main.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "284d49da8d0ab3252b5426423b608033d39c2669", "iss_html_url": "https://github.com/nvbn/thefuck/issues/786", "iss_label": "next release", "title": "\"TypeError: 'module' object is not callable\" On any invocation of thefuck", "body": "\r\n\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python 3.5.0`):\r\n\r\n The Fuck 3.25 using Python 3.6.4+\r\n\r\nYour shell and its version (`bash`, `zsh`, *Windows PowerShell*, etc.):\r\n\r\n GNU bash, version 4.4.18(1)-release (x86_64-pc-linux-gnu)\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\n Ubuntu 18.04, Bionic Beaver\r\n\r\nHow to reproduce the bug:\r\n\r\n Execute any bad command (I tested with `cd..` and `apt install whatever`. 
Then enter `fuck`.\r\n\r\nThe output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):\r\n\r\n```\r\nDEBUG: Run with settings: {'alter_history': True,\r\n 'debug': True,\r\n 'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},\r\n 'exclude_rules': [],\r\n 'history_limit': None,\r\n 'instant_mode': False,\r\n 'no_colors': False,\r\n 'priority': {},\r\n 'repeat': False,\r\n 'require_confirmation': True,\r\n 'rules': [],\r\n 'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],\r\n 'user_dir': PosixPath('/home/thomasokeeffe/.config/thefuck'),\r\n 'wait_command': 3,\r\n 'wait_slow_command': 15}\r\nDEBUG: Received output: \r\nDEBUG: Call: export THEFUCK_DEBUG=true; with env: {'CLUTTER_IM_MODULE': 'xim', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_MENU_PREFIX': 'gnome-', 'LANG': 'C', 'GDM_LANG': 'en_US', 'MANAGERPID': '1425', 'DISPLAY': ':0', 'INVOCATION_ID': '09b52cf5b26f4acf8d4fcf48e96663bb', 'UNITY_DEFAULT_PROFILE': 'unity', 'COMPIZ_CONFIG_PROFILE': 'ubuntu', 'GTK2_MODULES': 'overlay-scrollbar', 'DOOMWADDIR': '/opt/doom', 'GTK_CSD': '0', 'COLORTERM': 'truecolor', 'TF_SHELL_ALIASES': 'alias alert=\\'notify-send --urgency=low -i \"$([ $? 
= 0 ] && echo terminal || echo error)\" \"$(history|tail -n1|sed -e \\'\\\\\\'\\'s/^\\\\s*[0-9]\\\\+\\\\s*//;s/[;&|]\\\\s*alert$//\\'\\\\\\'\\')\"\\'\\nalias dfhack=\\'~/df_linux/dfhack\\'\\nalias dwarff=\\'/home/thomasokeeffe/df_linux/df\\'\\nalias egrep=\\'egrep --color=auto\\'\\nalias fgrep=\\'fgrep --color=auto\\'\\nalias grep=\\'grep --color=auto\\'\\nalias l=\\'ls -CF\\'\\nalias la=\\'ls -A\\'\\nalias ll=\\'ls -alF\\'\\nalias ls=\\'ls --color=auto\\'\\nalias pip=\\'pip3\\'\\nalias python=\\'python3\\'', 'JAVA_HOME': '/usr/lib/jvm/java-8-oracle/', 'J2SDKDIR': '/usr/lib/jvm/java-9-oracle', 'PYTHONIOENCODING': 'utf-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'MANDATORY_PATH': '/usr/share/gconf/unity.mandatory.path', 'XDG_GREETER_DATA_DIR': '/var/lib/lightdm-data/thomasokeeffe', 'DERBY_HOME': '/usr/lib/jvm/java-9-oracle/db', 'USER': 'thomasokeeffe', 'DESKTOP_SESSION': 'unity', 'QT4_IM_MODULE': 'xim', 'TEXTDOMAINDIR': '/usr/share/locale/', 'DEFAULTS_PATH': '/usr/share/gconf/unity.default.path', 'PWD': '/home/thomasokeeffe', 'HOME': '/home/thomasokeeffe', 'JOURNAL_STREAM': '9:28556', 'TEXTDOMAIN': 'im-config', 'J2REDIR': '/usr/lib/jvm/java-9-oracle', 'QT_ACCESSIBILITY': '1', 'XDG_SESSION_TYPE': 'x11', 'COMPIZ_BIN_PATH': '/usr/bin/', 'XDG_DATA_DIRS': '/usr/share/unity:/usr/share/unity:/usr/local/share:/usr/share:/var/lib/snapd/desktop:/var/lib/snapd/desktop', 'XDG_SESSION_DESKTOP': 'unity', 'WINEDEBUG': '-all', 'SSH_AGENT_LAUNCHER': 'gnome-keyring', 'GTK_MODULES': 'gail:atk-bridge:unity-gtk-module', 'GNOME_SESSION_XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'TERM': 'xterm-256color', 'VTE_VERSION': '5002', 'SHELL': '/bin/bash', 'XDG_SEAT_PATH': '/org/freedesktop/DisplayManager/Seat0', 'QT_IM_MODULE': 'ibus', 'XMODIFIERS': '@im=ibus', 'IM_CONFIG_PHASE': '2', 'XDG_CURRENT_DESKTOP': 'Unity:Unity7:ubuntu', 'GPG_AGENT_INFO': '/home/thomasokeeffe/.gnupg/S.gpg-agent:0:1:', 'TF_ALIAS': 'fuck', 'UNITY_HAS_3D_SUPPORT': 'true', 'SHLVL': '2', 'LANGUAGE': 'en_US', 'WINDOWID': '67108870', 'GDMSESSION': 'unity', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'LOGNAME': 'thomasokeeffe', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XAUTHORITY': '/home/thomasokeeffe/.Xauthority', 'TF_HISTORY': '\\t python\\n\\t fuck\\n\\t source ~/.bashrc\\n\\t fuck\\n\\t apt install whatever\\n\\t fuck\\n\\t cd..\\n\\t fuck\\n\\t fuck --version\\n\\t export THEFUCK_DEBUG=true', 'XDG_SESSION_PATH': '/org/freedesktop/DisplayManager/Session0', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-unity:/etc/xdg/xdg-unity:/etc/xdg', 'PATH': '/usr/bin/ski:/home/thomasokeeffe/.local/bin:/opt/doom:/usr/bin/python3:/usr/bin/ski:/home/thomasokeeffe/.local/bin:/opt/doom:/usr/bin/python3:/home/thomasokeeffe/.local/share/umake/bin:/home/thomasokeeffe/bin:/home/thomasokeeffe/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/lib/jvm/java-9-oracle/bin:/usr/lib/jvm/java-9-oracle/db/bin', 'THEFUCK_DEBUG': 'true', 'LD_PRELOAD': 'libgtk3-nocsd.so.0', 'SESSION_MANAGER': 'local/Wirecat:@/tmp/.ICE-unix/1738,unix/Wirecat:/tmp/.ICE-unix/1738', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'GTK_IM_MODULE': 'ibus', '_': '/home/thomasokeeffe/.local/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: took: 0:00:00.001356\r\nDEBUG: Importing rule: ag_literal; took: 0:00:00.000609\r\nDEBUG: Importing rule: apt_get; took: 0:00:00.001838\r\nDEBUG: Total took: 0:00:00.028332\r\nTraceback (most recent call last):\r\n File 
\"/home/thomasokeeffe/.local/bin/thefuck\", line 11, in \r\n sys.exit(main())\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/entrypoints/main.py\", line 25, in main\r\n fix_command(known_args)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/entrypoints/fix_command.py\", line 41, in fix_command\r\n corrected_commands = get_corrected_commands(command)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 89, in get_corrected_commands\r\n corrected for rule in get_rules()\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 49, in get_rules\r\n key=lambda rule: rule.priority)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/corrector.py\", line 17, in get_loaded_rules\r\n rule = Rule.from_path(path)\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/types.py\", line 140, in from_path\r\n rule_module = load_source(name, str(path))\r\n File \"/usr/lib/python3.6/imp.py\", line 172, in load_source\r\n module = _load(spec)\r\n File \"\", line 696, in _load\r\n File \"\", line 677, in _load_unlocked\r\n File \"\", line 678, in exec_module\r\n File \"\", line 219, in _call_with_frames_removed\r\n File \"/home/thomasokeeffe/.local/lib/python3.6/site-packages/thefuck/rules/apt_get.py\", line 8, in \r\n command_not_found = CommandNotFound()\r\nTypeError: 'module' object is not callable\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/nvbn/thefuck/commit/fb39d0bbd349e916ae12a77f04efd151dd046e6b\n\nhttps://github.com/nvbn/thefuck/commit/284d49da8d0ab3252b5426423b608033d39c2669", "file_loc": "{'base_commit': '284d49da8d0ab3252b5426423b608033d39c2669', 'files': [{'path': 'tests/rules/test_apt_get.py', 'status': 'modified', 'Loc': {\"(None, 'test_match', 13)\": {'mod': [15, 16, 17]}, \"(None, 'test_not_match', 30)\": {'mod': [33, 34, 35]}, \"(None, 'test_get_new_command', 49)\": {'mod': [52, 53, 54]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": ["tests/rules/test_apt_get.py"], "config": [], "asset": []}} +{"organization": "home-assistant", "repo_name": "core", "base_commit": "2b5e7c26111e447c2714284151c2e7555abd11e4", "iss_html_url": "https://github.com/home-assistant/core/issues/27175", "iss_label": "integration: google_assistant", "title": "Google assistant: something went wrong when using alarm", "body": "\r\n\r\n**Home Assistant release with the issue:**\r\n0.100.0b0\r\n\r\n\r\n\r\n\r\n**Last working Home Assistant release (if known):**\r\n\r\n\r\n**Operating environment (Hass.io/Docker/Windows/etc.):**\r\nhassio\r\n\r\n**Integration:**\r\n\r\nnabu casa cloud\r\ngoogle assistant\r\nenvisalink\r\n\r\n**Description of problem:**\r\nUsing the google assistant to arm home/arm away/disarm causes the google assistant to indicate that \"something went wrong\" although it actually performed the action.\r\nI am using the envisalink component which allows you to specify the code so that it is sent with each service call. I tried with/without the code configuration and it made no difference. 
\r\n\r\n\r\n**Problem-relevant `configuration.yaml` entries and (fill out even if it seems unimportant):**\r\n```yaml\r\n\r\n```\r\n\r\n**Traceback (if applicable):**\r\n```\r\n\r\n```\r\n\r\n**Additional information:**\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/36942", "commit_html_url": null, "file_loc": "{'base_commit': '2b5e7c26111e447c2714284151c2e7555abd11e4', 'files': [{'path': 'homeassistant/components/google_assistant/trait.py', 'status': 'modified', 'Loc': {\"('ArmDisArmTrait', None, 974)\": {'add': [990, 1000]}, \"('ArmDisArmTrait', 'sync_attributes', 1001)\": {'mod': [1005]}, \"('ArmDisArmTrait', 'execute', 1031)\": {'mod': [1034, 1038]}}}, {'path': 'tests/components/google_assistant/test_trait.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1033]}, \"(None, 'test_arm_disarm_arm_away', 865)\": {'mod': [876, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913]}, \"(None, 'test_arm_disarm_disarm', 1035)\": {'mod': [1046, 1053, 1054, 1055, 1056, 1057, 1058, 1059, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1067, 1068, 1069, 1070]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/google_assistant/trait.py"], "doc": [], "test": ["tests/components/google_assistant/test_trait.py"], "config": [], "asset": []}} +{"organization": "home-assistant", "repo_name": "core", "base_commit": "fb7fb0ea78ee335cd23f3647223a675718ccf048", "iss_html_url": "https://github.com/home-assistant/core/issues/40316", "iss_label": "integration: knx", "title": "KNX problem with 0.115.0 and 0.115.1", "body": "## The problem\r\nKNX integration has changed behavior and don't work fine:\r\n1) it is possible to read the status of a scene only if it is launched from the KNX bus but not if it is launched from the HA\r\n2) KNX climate don't read operation_mode_state_address correctly, when the operation mode is changed it reads the correct state then it is changed to \"standby\"\r\n\r\n## Environment\r\nHome Assistant 0.115.1\r\nFrontend: 20200917.1 - latest\r\nRaspberry 3\r\narch | armv7l\r\nchassis | embedded\r\ndev | false\r\ndocker | true\r\ndocker_version | 19.03.11\r\nhassio | true\r\nhost_os | HassOS 4.13\r\ninstallation_type | Home Assistant OS\r\nos_name | Linux\r\nos_version | 4.19.127-v7\r\npython_version | 3.8.5\r\nsupervisor | 245\r\ntimezone | Europe/Rome\r\nversion | 0.115.1\r\nvirtualenv | false\r\n\r\n- Home Assistant Core release with the issue: 0.115.1\r\n- Last working Home Assistant Core release (if known): 0.113.3\r\n- Operating environment (OS/Container/Supervised/Core): 4.12 \r\n- Integration causing this issue: KNX\r\n- Link to integration documentation on our website: https://www.home-assistant.io/integrations/knx/\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/40472", "commit_html_url": null, "file_loc": "{'base_commit': 'fb7fb0ea78ee335cd23f3647223a675718ccf048', 'files': [{'path': 'homeassistant/components/knx/manifest.json', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, 2268)': {'mod': [2268]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc\nJson"}, "loctype": {"code": 
["homeassistant/components/knx/manifest.json"], "doc": [], "test": [], "config": ["requirements_all.txt"], "asset": []}} +{"organization": "home-assistant", "repo_name": "core", "base_commit": "faedba04079d2c999a479118b5189ef4c0bff060", "iss_html_url": "https://github.com/home-assistant/core/issues/77928", "iss_label": "integration: velux\nstale", "title": "Somfy blind motors cannot be assigned to a room", "body": "### The problem\n\nSomfy motors will return `None` as serial number via the Velux KLF-200:\r\n[Handle devices without serial numbers.](https://github.com/Julius2342/pyvlx/pull/42/commits/d409d66db8732553e928f5dd9d00d458ba638dea)\r\n\r\nThis serial is usesd as unique id here:\r\n[core/homeassistant/components/velux/__init__.py#L114](https://github.com/home-assistant/core/blob/dev/homeassistant/components/velux/__init__.py#L114)\r\n\r\nCould it be reasonable to return the node name instead of `None`?\r\n```python\r\n if self.node.serial_number:\r\n return self.node.serial_number\r\n elif self.node.name:\r\n return self.node.name\r\n else:\r\n return \"velux_#\" + str(self.node.node_id)\r\n```\n\n### What version of Home Assistant Core has the issue?\n\n2022.8.7\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant OS\n\n### Integration causing the issue\n\nVelux\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/velux/\n\n### Diagnostics information\n\n_No response_\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\nRelated issues:\r\n[66262](https://github.com/home-assistant/core/issues/66262)\r\n[35935](https://github.com/home-assistant/core/issues/35935)\r\n[74009](https://github.com/home-assistant/core/issues/74009)\r\n", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/117508", "commit_html_url": null, "file_loc": "{'base_commit': 'faedba04079d2c999a479118b5189ef4c0bff060', 'files': [{'path': 'homeassistant/components/velux/__init__.py', 'status': 'modified', 'Loc': {\"('VeluxEntity', None, 106)\": {'mod': [111]}, \"('VeluxEntity', '__init__', 111)\": {'mod': [114]}}}, {'path': 'homeassistant/components/velux/cover.py', 'status': 'modified', 'Loc': {\"(None, 'async_setup_entry', 26)\": {'mod': [32]}, \"('VeluxCover', None, 38)\": {'mod': [44]}, \"('VeluxCover', '__init__', 44)\": {'mod': [46]}}}, {'path': 'homeassistant/components/velux/light.py', 'status': 'modified', 'Loc': {\"(None, 'async_setup_entry', 19)\": {'mod': [26]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/velux/light.py", "homeassistant/components/velux/cover.py", "homeassistant/components/velux/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "home-assistant", "repo_name": "core", "base_commit": "551a584ca69771804b6f094eceb67dcb25a2f627", "iss_html_url": "https://github.com/home-assistant/core/issues/68620", "iss_label": "needs-more-information\nintegration: overkiz", "title": "Polling interval for stateless (e.g. Somfy (Oceania)) is not applied in Overkiz", "body": "### The problem\n\nEvery day I get a \"Gateway ID\" error in Overkiz error that reads as below. 
Same problem as [#66606](https://github.com/home-assistant/core/issues/66606) \r\n\r\n\"Translation Error: The intl string context variable \"gateway id\" was not provided to the string \"Gateway: {gateway id}\" Overkiz (by Somfy)\". \r\n\r\nWhen I click \"Reconfigure\" and reenter my password, the problem is corrected. But then it reoccurs in the next day or so.\r\n\r\nLooking at the log, it seems like there's some really aggressive polling going on? \r\n\r\n\n\n### What version of Home Assistant Core has the issue?\n\ncore-2022.3.5\n\n### What was the last working version of Home Assistant Core?\n\n_No response_\n\n### What type of installation are you running?\n\nHome Assistant Supervised\n\n### Integration causing the issue\n\nOverkiz (by Somfy)\n\n### Link to integration documentation on our website\n\nhttps://www.home-assistant.io/integrations/overkiz\n\n### Diagnostics information\n\n[config_entry-overkiz-0bf20335f9aeaa86644cb071861f6ef1.json.txt](https://github.com/home-assistant/core/files/8341651/config_entry-overkiz-0bf20335f9aeaa86644cb071861f6ef1.json.txt)\r\n\n\n### Example YAML snippet\n\n_No response_\n\n### Anything in the logs that might be useful for us?\n\n_No response_\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": "https://github.com/home-assistant/core/pull/133617", "commit_html_url": null, "file_loc": "{'base_commit': '551a584ca69771804b6f094eceb67dcb25a2f627', 'files': [{'path': 'homeassistant/components/overkiz/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [43]}, \"(None, 'async_setup_entry', 57)\": {'mod': [116, 117, 118, 119, 122]}}}, {'path': 'homeassistant/components/overkiz/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [46]}}}, {'path': 'homeassistant/components/overkiz/coordinator.py', 'status': 'modified', 'Loc': {\"('OverkizDataUpdateCoordinator', None, 36)\": {'add': [38]}, \"('OverkizDataUpdateCoordinator', '__init__', 39)\": {'add': [67], 'mod': [48, 62, 63, 64]}, \"('OverkizDataUpdateCoordinator', '_async_update_data', 69)\": {'add': [104], 'mod': [106]}, '(None, None, None)': {'add': [126], 'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["homeassistant/components/overkiz/const.py", "homeassistant/components/overkiz/__init__.py", "homeassistant/components/overkiz/coordinator.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "f7590d47641cedbf630b909aa8f53930c4a9ce5c", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/983", "iss_label": "site-bug", "title": "VRV - NoneType object is not iterable", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n\r\n- [X] I'm reporting a bug unrelated to a specific site\r\n- [X] I've verified that I'm running yt-dlp version **2021.09.02**\r\n- [X] I've checked that all provided URLs are alive and playable in a browser\r\n- [X] The provided URLs do not contain any DRM to the best of my knowledge\r\n- [X] I've checked that all URLs and arguments with special characters are properly quoted or escaped\r\n- [X] I've searched the bugtracker for similar bug reports including closed ones\r\n- [X] I've read bugs section in FAQ\r\n\r\n\r\n## Verbose log\r\n\r\n\r\n\r\n```\r\nytdl -F -u PRIVATE -p PRIVATE \"https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend\" --verbose\r\n[debug] Command-line 
config: ['-F', '-u', 'PRIVATE', '-p', 'PRIVATE', 'https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend', '--verbose']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8, pref cp1252\r\n[debug] yt-dlp version 2021.09.02 (exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-10-10.0.18363-SP0\r\n[debug] exe versions: ffmpeg 4.4-full_build-www.gyan.dev, ffprobe 4.4-full_build-www.gyan.dev\r\n[debug] Optional libraries: mutagen, pycryptodome, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[vrv] None: Downloading webpage\r\n[vrv] Downloading Token Credentials JSON metadata\r\n[debug] [vrv] Extracting URL: https://vrv.co/watch/GRP5G39JR/The-Seven-Heavenly-Virtues:The-Angels-Descend\r\n[vrv] GRP5G39JR: Downloading resource path JSON metadata\r\n[vrv] GRP5G39JR: Downloading CMS Signing JSON metadata\r\n[vrv] GRP5G39JR: Downloading object JSON metadata\r\n[vrv] GRP5G39JR: Downloading video JSON metadata\r\n[vrv] GRP5G39JR: Downloading streams JSON metadata\r\n[vrv] GRP5G39JR: Downloading dash-audio-en-US information\r\n[vrv] GRP5G39JR: Downloading hls-audio-en-US information\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\nERROR: 'NoneType' object is not iterable\r\nTraceback (most recent call last):\r\n File \"yt_dlp\\YoutubeDL.py\", line 1214, in wrapper\r\n File \"yt_dlp\\YoutubeDL.py\", line 1239, in __extract_info\r\n File \"yt_dlp\\extractor\\common.py\", line 584, in extract\r\n File \"yt_dlp\\extractor\\vrv.py\", line 221, in _real_extract\r\nTypeError: 'NoneType' object is not iterable\r\n```\r\n\r\n\r\n\r\n## Description\r\n\r\n\r\n\r\nI've noticed this is happening a little more often but it seems that the entire series for this one does this but then it works just fine on other series. So haven't really noticed where this is hanging up but i used `--write-pages` and got an extra dump file for this one vs. one that actually downloads, which looks like this.\r\n\r\n```\r\n\r\n \r\n \r\n \r\n VRV - Home of Your Favorite Channels\r\n \r\n
      [--write-pages HTML dump truncated: VRV homepage boilerplate including marketing copy, series blurbs (Miss Kobayashi's Dragon Maid, TSUKIMICHI -Moonlit Fantasy-), account sign-up text, and an 'Ancient browser detected' browser-upgrade notice]
      \r\n \r\n ```\r\n\r\nNot sure itf it is helpful but that's all I got for now.", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/f7590d47641cedbf630b909aa8f53930c4a9ce5c", "file_loc": "{'base_commit': 'f7590d47641cedbf630b909aa8f53930c4a9ce5c', 'files': [{'path': 'yt_dlp/extractor/vrv.py', 'status': 'modified', 'Loc': {\"('VRVIE', '_real_extract', 168)\": {'mod': [221]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/vrv.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3183", "iss_label": "geo-blocked\nsite-bug", "title": "Tele5 has an extraction error", "body": "### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Description\n\ntrying to download the curernt andromenda series:\r\n`yt-dlp -F https://tele5.de/mediathek/gene-roddenberrys-andromeda/`\r\n`[Tele5] gene-roddenberrys-andromeda: Downloading webpage`\r\n`ERROR: gene-roddenberrys-andromeda: An extractor error has occurred. (caused by KeyError('assetid')); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. Confirm you are on the latest ver`\r\n\r\n\n\n### Verbose log\n\n```shell\nERROR: gene-roddenberrys-andromeda: An extractor error has occurred. (caused by KeyError('assetid')); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the \"Broken site\" issue template properly. 
Confirm you are on the latest version using yt-dlp -U\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/common.py\", line 617, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/tele5.py\", line 81, in _real_extract\r\n asset_id, country, realm = (player_info[x] for x in ('assetid', 'locale', 'realm', ))\r\n File \"/usr/bin/yt-dlp/yt_dlp/extractor/tele5.py\", line 81, in \r\n asset_id, country, realm = (player_info[x] for x in ('assetid', 'locale', 'realm', ))\r\nKeyError: 'assetid'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/50e93e03a7ca6ae35a319ea310104f7d6d91eee3", "file_loc": "{'base_commit': '50e93e03a7ca6ae35a319ea310104f7d6d91eee3', 'files': [{'path': 'yt_dlp/YoutubeDL.py', 'status': 'modified', 'Loc': {}}, {'path': 'yt_dlp/extractor/aliexpress.py', 'status': 'modified', 'Loc': {\"('AliExpressLiveIE', None, 12)\": {'mod': [21]}}}, {'path': 'yt_dlp/extractor/applepodcasts.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 6]}, \"('ApplePodcastsIE', None, 15)\": {'add': [26], 'mod': [17, 22, 24, 25, 42, 43, 44, 45, 46]}, \"('ApplePodcastsIE', '_real_extract', 42)\": {'add': [52, 61], 'mod': [50, 56]}}}, {'path': 'yt_dlp/extractor/arte.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, \"('ArteTVPlaylistIE', '_real_extract', 230)\": {'add': [255]}}}, {'path': 'yt_dlp/extractor/audiomack.py', 'status': 'modified', 'Loc': {\"('AudiomackIE', None, 16)\": {'add': [31]}}}, {'path': 'yt_dlp/extractor/bbc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, \"('BBCIE', None, 604)\": {'add': [786, 791], 'mod': [796, 797, 799, 800, 801]}, \"('BBCCoUkIE', None, 39)\": {'mod': [41]}, \"('BBCCoUkIE', '_process_media_selector', 363)\": {'mod': [397, 398, 399]}, \"('BBCIE', '_real_extract', 906)\": {'mod': [1174, 1175, 1176]}, \"('BBCIE', 'parse_media', 1206)\": {'mod': [1217]}}}, {'path': 'yt_dlp/extractor/bigo.py', 'status': 'modified', 'Loc': {\"('BigoIE', '_real_extract', 30)\": {'add': [36], 'mod': [39, 47]}}}, {'path': 'yt_dlp/extractor/extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [70, 93, 308]}}}, {'path': 'yt_dlp/extractor/nuvid.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [8]}, \"('NuvidIE', None, 15)\": {'add': [22, 25, 30, 48], 'mod': [29]}, \"('NuvidIE', '_real_extract', 53)\": {'mod': [58, 59, 60, 61, 62, 63, 64, 70]}}}, {'path': 'yt_dlp/extractor/rutv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}, \"('RUTVIE', '_real_extract', 126)\": {'mod': [182]}}}, {'path': 'yt_dlp/extractor/streamcz.py', 'status': 'modified', 'Loc': {\"('StreamCZIE', None, 14)\": {'add': [24], 'mod': [34]}}}, {'path': 'yt_dlp/extractor/tele5.py', 'status': 'modified', 'Loc': {\"('Tele5IE', None, 12)\": {'add': [30], 'mod': [16, 45, 67, 68, 70, 71, 73, 74, 75, 76, 78, 80, 81, 82, 83, 84, 86, 87, 88, 90, 91, 92, 93, 94, 95, 97, 98, 99, 101, 102, 104, 105, 106, 107, 108]}, '(None, None, None)': {'mod': [4, 6, 7, 8, 10, 11, 12]}}}, {'path': 'yt_dlp/extractor/tv2dk.py', 'status': 'modified', 'Loc': {\"('TV2DKIE', '_real_extract', 79)\": {'add': [98], 'mod': [94]}, \"('TV2DKIE', None, 16)\": {'mod': [44, 45]}}}, {'path': 'yt_dlp/extractor/uol.py', 'status': 'modified', 'Loc': {\"('UOLIE', '_real_extract', 67)\": {'mod': [98]}}}, {'path': 'yt_dlp/extractor/urplay.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 7]}, \"('URPlayIE', None, 16)\": {'add': [28], 
'mod': [26, 53, 54, 55, 56, 57]}, \"('URPlayIE', '_real_extract', 54)\": {'add': [113], 'mod': [75, 76, 77, 78, 79, 101]}}}, {'path': 'yt_dlp/extractor/videa.py', 'status': 'modified', 'Loc': {\"('VideaIE', '_real_extract', 112)\": {'mod': [149, 166, 167, 168]}}}, {'path': 'yt_dlp/extractor/vimeo.py', 'status': 'modified', 'Loc': {\"('VimeoIE', None, 297)\": {'add': [638]}}}, {'path': 'yt_dlp/extractor/wdr.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12, 24]}, \"('WDRIE', None, 25)\": {'add': [43], 'mod': [31, 39]}, \"('WDRPageIE', None, 139)\": {'add': [209, 234], 'mod': [173, 175, 177, 186, 194, 197, 248]}, \"('WDRPageIE', '_real_extract', 258)\": {'add': [273], 'mod': [293, 295, 296, 299, 300, 301, 302]}, \"('WDRElefantIE', '_real_extract', 324)\": {'add': [336]}, \"('WDRIE', '_real_extract', 47)\": {'mod': [129, 130, 132]}}}, {'path': 'yt_dlp/extractor/zdf.py', 'status': 'modified', 'Loc': {\"('ZDFIE', None, 136)\": {'add': [138], 'mod': [198, 199, 200, 201, 202, 203, 204]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/extractors.py", "yt_dlp/extractor/streamcz.py", "yt_dlp/extractor/bbc.py", "yt_dlp/extractor/zdf.py", "yt_dlp/extractor/tv2dk.py", "yt_dlp/extractor/rutv.py", "yt_dlp/extractor/aliexpress.py", "yt_dlp/extractor/wdr.py", "yt_dlp/extractor/videa.py", "yt_dlp/extractor/nuvid.py", "yt_dlp/extractor/arte.py", "yt_dlp/extractor/vimeo.py", "yt_dlp/extractor/urplay.py", "yt_dlp/extractor/bigo.py", "yt_dlp/YoutubeDL.py", "yt_dlp/extractor/applepodcasts.py", "yt_dlp/extractor/tele5.py", "yt_dlp/extractor/uol.py", "yt_dlp/extractor/audiomack.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "80e8493ee7c3083f4e215794e4a67ba5265f24f7", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2885", "iss_label": "site-request\npatch-available", "title": "Add Filmarkivet.se as a Supported Site", "body": "### Checklist\n\n- [X] I'm reporting a new site support request\n- [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required\n\n### Region\n\nUnited States\n\n### Example URLs\n\nhttps://www.filmarkivet.se/movies/paris-d-moll/\n\n### Description\n\nPlease add Filmarkivet.se as a supported site. I already watched the YouTube video \"The Secret Logos Of SF Studios (1919 - 1999)\" by CCGFilms, which has some SF Studios logos. 
I need to capture its logos.\n\n### Verbose log\n\n```shell\n[debug] Command-line config: ['-v', 'https://www.filmarkivet.se/movies/paris-d-moll/']\r\n[debug] Encodings: locale cp1252, fs utf-8, out utf-8 (No ANSI), err utf-8 (No ANSI), pref cp1252\r\n[debug] yt-dlp version 2022.02.04 [c1653e9] (win_exe)\r\n[debug] Python version 3.8.10 (CPython 64bit) - Windows-7-6.1.7601-SP1\r\n[debug] exe versions: ffmpeg N-105662-ge534d98af3-20220217 (setts), ffprobe N-105038-g30322ebe3c-sherpya\r\n[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets\r\n[debug] Proxy map: {}\r\n[debug] [generic] Extracting URL: https://www.filmarkivet.se/movies/paris-d-moll/\r\n[generic] paris-d-moll: Requesting header\r\nWARNING: [generic] Falling back on generic information extractor.\r\n[generic] paris-d-moll: Downloading webpage\r\nWARNING: [generic] URL could be a direct video link, returning it as such.\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] paris-d-moll: Downloading 1 format(s): 0\r\n[debug] Invoking downloader on \"https://www.filmarkivet.se/movies/paris-d-moll/\"\r\n[download] Destination: paris-d-moll [paris-d-moll].unknown_video\r\n[download] 100% of 373.64KiB in 00:01\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/80e8493ee7c3083f4e215794e4a67ba5265f24f7", "file_loc": "{'base_commit': '80e8493ee7c3083f4e215794e4a67ba5265f24f7', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {\"('GenericIE', None, 143)\": {'add': [2529]}}}, {'path': 'yt_dlp/utils.py', 'status': 'modified', 'Loc': {\"(None, 'is_html', 3283)\": {'add': [3292], 'mod': [3294, 3295, 3296, 3297, 3298]}, '(None, None, None)': {'mod': [3300]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/utils.py", "yt_dlp/extractor/generic.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "5da08bde9e073987d1aae2683235721e4813f9c6", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/5424", "iss_label": "site-enhancement", "title": "[VLIVE.TV] Extract release timestamp", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nIs there a way to change `upload_date` from UTC to a specific GMT? This video (https://www.vlive.tv/post/1-18318601) was posted on Nov. 
28, 2018 KST (Korean Standard Time) but yt-dlp downloads it as 20181127.\r\n\r\nI know you can prefer not to use UTC for YouTube videos but don't know how for other sites.\r\n\r\nHere is my command:\r\n`!yt-dlp -vU --embed-metadata --embed-thumbnail --merge-output-format \"mkv/mp4\" --write-subs --sub-langs all,-live_chat --embed-subs --compat-options no-keep-subs \"https://www.vlive.tv/post/1-18318601\" -o \"%(upload_date)s - %(creator)s - %(title)s.%(ext)s\" -P \"/content/drive/Shareddrives/VLIVE\" -P temp:\"/content/drive/Shareddrives/VLIVE/!temp\"`\r\n\r\nThanks!\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU', '--embed-metadata', '--embed-thumbnail', '--merge-output-format', 'mkv/mp4', '--write-subs', '--sub-langs', 'all,-live_chat', '--embed-subs', '--compat-options', 'no-keep-subs', 'https://www.vlive.tv/post/1-18318601', '-o', '%(upload_date)s - %(creator)s - %(title)s.%(ext)s', '-P', '/content/drive/Shareddrives/VLIVE', '-P', 'temp:/content/drive/Shareddrives/VLIVE/!temp']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out UTF-8, error UTF-8, screen UTF-8\r\n[debug] yt-dlp version 2022.10.04 [4e0511f27]\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Compatibility options: no-keep-subs\r\n[debug] Python 3.7.15 (CPython 64bit) - Linux-5.10.133+-x86_64-with-Ubuntu-18.04-bionic (glibc 2.26)\r\n[debug] Checking exe version: ffmpeg -bsfs\r\n[debug] Checking exe version: ffprobe -bsfs\r\n[debug] exe versions: ffmpeg 3.4.11, ffprobe 3.4.11\r\n[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1706 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: 2022.10.04, Current version: 2022.10.04\r\nyt-dlp is up to date (2022.10.04)\r\n[debug] [vlive:post] Extracting URL: https://www.vlive.tv/post/1-18318601\r\n[vlive:post] 1-18318601: Downloading post JSON metadata\r\n[debug] [vlive] Extracting URL: http://www.vlive.tv/video/101216\r\n[vlive] 101216: Downloading officialVideoPost JSON metadata\r\n[vlive] 101216: Downloading inkey JSON metadata\r\n[vlive] 101216: Downloading JSON metadata\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id\r\n[info] 101216: Downloading subtitles: en_US, es_PA, es_ES, fr_FR, in_ID, pt_PT, vi_VN, jp, zh_CN, ko_KR\r\n[debug] Default format spec: bestvideo*+bestaudio/best\r\n[info] 101216: Downloading 1 format(s): avc1_720P\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_06_3/aa339b4a-f89e-11e8-bc80-3ca82a21f531-1544022097899_en_US_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.en_US.vtt\r\n[download] 100% of 55.30KiB in 00:00:00 at 154.65KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3b1a6fdd-f30e-11e8-8111-3ca82a220799-1543410308156_es_PA_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt\r\n[download] 100% of 24.65KiB in 00:00:00 at 317.51KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/2020_11_16/a3003ab1-27e4-11eb-9a2e-0050569c085d-1605514850566_es_ES_fan.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt\r\n[download] 100% of 59.17KiB in 00:00:00 at 197.86KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/2020_12_14/8d2ea1db-3dcf-11eb-9b2a-0050569c085d-1607924720110_fr_FR_fan.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt\r\n[download] 100% of 55.22KiB in 00:00:00 at 528.98KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3ace9972-f30e-11e8-8606-3ca82a22c1e9-1543410307659_in_ID_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt\r\n[download] 100% of 24.42KiB in 00:00:00 at 157.16KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3b090ad1-f30e-11e8-9c04-3ca82a225339-1543410308041_pt_PT_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.pt_PT.vtt\r\n[download] 100% of 25.00KiB in 00:00:00 at 267.88KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_07_4/dac66573-f9fc-11e8-98b0-3ca82a22d7a5-1544172503245_vi_VN_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt\r\n[download] 100% of 64.32KiB in 00:00:00 at 555.77KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_11_28_2/3aee2f6b-f30e-11e8-bb16-3ca82a21e509-1543410307868_jp_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt\r\n[download] 100% of 23.29KiB in 00:00:00 at 258.58KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_10_4/077db2f1-fc58-11e8-8818-3ca82a22c1e9-1544431564794_zh_CN_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt\r\n[download] 100% of 52.85KiB in 00:00:00 at 581.89KiB/s\r\n[info] Writing video subtitles to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt\r\n[debug] Invoking http downloader on \"http://resources-rmcnmv.pstatic.net/globalv/c/read/v2/VOD_ALPHA/global_v_2018_12_06_2/cc2e3feb-f921-11e8-8285-3ca82a2243c9-1544078418977_ko_KR_cp.vtt\"\r\n[download] Destination: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt\r\n[download] 100% of 64.29KiB in 00:00:00 at 450.71KiB/s\r\n[info] Downloading video thumbnail 1 ...\r\n[info] Writing video thumbnail 1 to: /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.png\r\n[download] /content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4 has already been downloaded\r\n[EmbedSubtitle] Embedding subtitles in \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! 
- \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.fr_FR.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt' -i 'file:/content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt' -map 0 -dn -ignore_unknown -c copy -c:s mov_text -map -0:s -map 1:0 -metadata:s:s:0 language=eng -map 2:0 -metadata:s:s:1 language=spa -map 3:0 -metadata:s:s:2 language=spa -map 4:0 -metadata:s:s:3 language=fra -map 5:0 -metadata:s:s:4 language=ind -map 6:0 -metadata:s:s:5 language=por -map 7:0 -metadata:s:s:6 language=vie -map 8:0 -metadata:s:s:7 language=jp -map 9:0 -metadata:s:s:8 language=zho -map 10:0 -metadata:s:s:9 language=kor -movflags +faststart 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.temp.mp4'\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.en_US.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.fr_FR.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_PA.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.jp.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.ko_KR.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.zh_CN.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.in_ID.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.pt_PT.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.vi_VN.vtt (pass -k to keep)\r\nDeleting original file /content/drive/Shareddrives/VLIVE/!temp/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.es_ES.vtt (pass -k to keep)\r\n[Metadata] Adding metadata to \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4\"\r\n[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.mp4' -map 0 -dn -ignore_unknown -c copy -write_id3v1 1 -metadata 'title=\u2665\ub3c4\uc694\uc77c\u2665 12/1 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601' -metadata date=20181127 -metadata purl=http://www.vlive.tv/video/101216 -metadata comment=http://www.vlive.tv/video/101216 -metadata 'artist=NCT\uc758 night night!' -movflags +faststart 'file:/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! with \ub3c4\uc601.temp.mp4'\r\n[EmbedThumbnail] mutagen: Adding thumbnail to \"/content/drive/Shareddrives/VLIVE/20181127 - NCT\uc758 night night! - \u2665\ub3c4\uc694\uc77c\u2665 12\u29f81 \ub3c4\ub2e4\uc81c \ub179\uc74c\ud604\uc7a5 ! 
with \ub3c4\uc601.mp4\"\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/HHeroin/yt-dlp/commit/5da08bde9e073987d1aae2683235721e4813f9c6", "file_loc": "{'base_commit': '5da08bde9e073987d1aae2683235721e4813f9c6', 'files': [{'path': 'yt_dlp/extractor/vlive.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15]}, \"('VLiveIE', None, 69)\": {'add': [83, 100]}, \"('VLiveIE', '_real_extract', 148)\": {'add': [171]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/vlive.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "51c22ef4e2af966d6100d0d97d9e8019022df8ad", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/2996", "iss_label": "bug", "title": "'<' not supported between instances of 'float' and 'str' and --throttled-rate error after update?", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nAfter update I get error\r\n\r\n'<' not supported between instances of 'float' and 'str'\r\n\r\nI found out that it is somewhat related to --throttled-rate setting? 
When I remove it I can download from YT no issues\r\nIf I leave it, I get the following message\r\n\r\n[download] 0.0% of 714.94MiB at 499.98KiB/s ETA 24:24ERROR: '<' not supported between instances of 'float' and 'str'\r\n\n\n### Verbose log\n\n```shell\nMicrosoft Windows [Version 6.1.7601]\r\n\r\n>yt-dlp https://www.youtube.com/watch?v=XUp9pe1T-UE --throttled-rate 999k\r\n[youtube] XUp9pe1T-UE: Downloading webpage\r\n[youtube] XUp9pe1T-UE: Downloading android player API JSON\r\n[info] XUp9pe1T-UE: Downloading 1 format(s): 571+251\r\nWARNING: Requested formats are incompatible for merge and will be merged into mkv\r\n[download] Destination: 8k VIDEOS _ Beauty of Nature 8K (60 FPS) HDR UltraHD _ Sony Demo [XUp9pe1T-UE].f571.mp4\r\n[download] 0.0% of 505.86MiB at 90.90KiB/s ETA 01:34:58ERROR: '<' not supported between instances of 'float' and 'str'\r\n\r\nyt>\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/51c22ef4e2af966d6100d0d97d9e8019022df8ad", "file_loc": "{'base_commit': '51c22ef4e2af966d6100d0d97d9e8019022df8ad', 'files': [{'path': 'yt_dlp/__init__.py', 'status': 'modified', 'Loc': {\"(None, 'validate_options', 156)\": {'mod': [258]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "6f638d325e1878df304822c6bf4e231e06dae89a", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/3467", "iss_label": "docs/meta/cleanup\nhigh-priority\nregression", "title": "Error since commit 43cc91a", "body": "### Checklist\n\n- [X] I'm reporting a bug unrelated to a specific site\n- [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Description\n\nAfter commit 43cc91a, I get the error shown in the verbose log.\n\n### Verbose log\n\n```shell\nyt-dlp -Uv\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.8/runpy.py\", line 194, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"/usr/lib/python3.8/runpy.py\", line 87, in _run_code\r\n exec(code, run_globals)\r\n File \"/usr/local/bin/yt-dlp/__main__.py\", line 13, in \r\n File \"\", line 259, in load_module\r\n File \"/usr/local/bin/yt-dlp/yt_dlp/__init__.py\", line 12, in \r\nModuleNotFoundError: No module named 'yt_dlp.compat'\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/6f638d325e1878df304822c6bf4e231e06dae89a", "file_loc": "{'base_commit': '6f638d325e1878df304822c6bf4e231e06dae89a', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, 61)': {'add': [61]}, '(None, None, 64)': {'mod': [64]}, '(None, None, 68)': {'mod': [68]}, '(None, None, 70)': {'mod': [70]}}}, {'path': 'yt_dlp/extractor/anvato.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7], 'mod': [22, 23, 24, 25, 26, 27, 28, 29, 30]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/anvato.py"], "doc": [], "test": [], "config": ["Makefile"], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "14a086058a30a0748b5b716e9b21481f993518f3", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1601", "iss_label": "site-bug", "title": "ARD:mediathek doesn't work anymore", "body": "### Checklist\n\n- [X] I'm reporting a broken site\n- [X] I've verified that I'm running yt-dlp version **2021.10.22**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))\n- [X] I've checked that all provided URLs are alive and playable in a browser\n- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. 
DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required\n\n### Region\n\nGermany\n\n### Description\n\nDownloading from ARDmediathek dosen\u2019t work anymore\n\n### Verbose log\n\n```shell\n$ /repositories/yt-dlp/yt-dlp --no-config --verbose https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/\r\n[debug] Command-line config: ['--no-config', '--verbose', 'https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/']\r\n[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8\r\n[debug] yt-dlp version 2021.10.22 (zip)\r\n[debug] Lazy loading extractors is disabled\r\n[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']\r\n[debug] Python version 3.9.7 (CPython 64bit) - Linux-5.13.0-21-generic-x86_64-with-glibc2.34\r\n[debug] exe versions: ffmpeg 4.4 (setts), ffprobe 4.4, rtmpdump 2.4\r\n[debug] Optional libraries: Cryptodome, keyring, mutagen, sqlite\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP 53.36.205.78 (DE) as X-Forwarded-For\r\n[debug] [ARD:mediathek] Extracting URL: https://www.ardmediathek.de/video/tagesschau-oder-tagesschau-20-00-uhr/das-erste/Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll/\r\n[ARD:mediathek] Y3JpZDovL2Rhc2Vyc3RlLmRlL3RhZ2Vzc2NoYXUvZmM4ZDUxMjgtOTE0ZC00Y2MzLTgzNzAtNDZkNGNiZWJkOTll: Downloading webpage\r\n[ARD:mediathek] 10049223: Downloading media JSON\r\nERROR: [ARD:mediathek] Unable to download JSON metadata: HTTP Error 404: Not Found (caused by ); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. 
Be sure to call yt-dlp with the --verbose flag and include its complete output.\r\n File \"/repositories/yt-dlp/yt-dlp/yt_dlp/extractor/common.py\", line 713, in _request_webpage\r\n return self._downloader.urlopen(url_or_request)\r\n File \"/repositories/yt-dlp/yt-dlp/yt_dlp/YoutubeDL.py\", line 3288, in urlopen\r\n return self._opener.open(req, timeout=self._socket_timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 555, in error\r\n result = self._call_chain(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 747, in http_error_302\r\n return self.parent.open(new, timeout=req.timeout)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 523, in open\r\n response = meth(req, response)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 632, in http_response\r\n response = self.parent.error(\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 561, in error\r\n return self._call_chain(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 494, in _call_chain\r\n result = func(*args)\r\n File \"/usr/lib/python3.9/urllib/request.py\", line 641, in http_error_default\r\n raise HTTPError(req.full_url, code, msg, hdrs, fp)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/yt-dlp/yt-dlp/commit/14a086058a30a0748b5b716e9b21481f993518f3", "file_loc": "{'base_commit': '14a086058a30a0748b5b716e9b21481f993518f3', 'files': [{'path': 'yt_dlp/extractor/ard.py', 'status': 'modified', 'Loc': {\"('ARDBetaMediathekIE', None, 390)\": {'add': [405, 428], 'mod': [391]}, \"('ARDBetaMediathekIE', '_ARD_extract_playlist', 512)\": {'mod': [528, 529, 530, 531, 532, 533, 534, 536, 537, 538, 539, 540, 541]}, \"('ARDBetaMediathekIE', '_real_extract', 551)\": {'mod': [577]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/ard.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/2019", "iss_label": "", "title": "[Bug] text of collapsed node still present ", "body": "On latest commit https://github.com/comfyanonymous/ComfyUI/commit/d66b631d74e6f6ac95c61c63d4a0da150bf74903.\r\nDragging the node also doesn't do anything until it's uncollapsed.\r\n\"Screenshot\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/comfyanonymous/ComfyUI/commit/ab7d4f784892c275e888d71aa80a3a2ed59d9b83", "file_loc": "{'base_commit': 'ab7d4f784892c275e888d71aa80a3a2ed59d9b83', 'files': [{'path': 'web/scripts/domWidget.js', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [235, 292]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["web/scripts/domWidget.js"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "3e589bf1356024fb471a9d17738e4626f21a953b", 
"iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1153", "iss_label": "bug\ntriage", "title": "Azure Deployment Name Bug", "body": "## Policy and info\r\n - Maintainers will close issues that have been stale for 14 days if they contain relevant answers.\r\n - Adding the label \"sweep\" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/\r\n\r\n## Expected Behavior\r\n\r\nThere shouldn't be an error with the model name.\r\n\r\n## Current Behavior\r\n\r\n### Deployment name seems to mix with model name.\r\n\r\nEverything seems to work perfectly and code is being made:\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/9fd4fcf5-d78e-4179-9406-a98867a9dfc1)\r\n\r\nBut then an error pops up telling me that the model doesn't exist and it takes my Azure OpenAI deployment name and says it's not a model.\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/de5d275e-aa79-4d55-899e-ecf87d7a4261)\r\n\r\nHere is the command style I used following these instructions from here: https://gpt-engineer.readthedocs.io/en/latest/open_models.html\r\n![image](https://github.com/gpt-engineer-org/gpt-engineer/assets/145611451/987113ca-0616-4a38-9f35-ccec2cebda5d)\r\n\r\n`gpt-engineer --azure [redacted_endpoint_url] ./snake_game/ [redacted_deployment_name]`\r\n\r\n\r\n## Additional Failure Information\r\n\r\nUsing Azure OpenAI with gpt-4-turbo deployed with a different deployment name. Only installed gpt-engineer in a virtual environment.", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1170", "commit_html_url": null, "file_loc": "{'base_commit': '3e589bf1356024fb471a9d17738e4626f21a953b', 'files': [{'path': '.github/CONTRIBUTING.md', 'status': 'modified', 'Loc': {'(None, None, 114)': {'add': [114]}}}, {'path': 'gpt_engineer/core/ai.py', 'status': 'modified', 'Loc': {\"('AI', '_create_chat_model', 330)\": {'mod': [349]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/core/ai.py"], "doc": [".github/CONTRIBUTING.md"], "test": [], "config": [], "asset": []}} +{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "c4c1203fc07b2e23c3e5a5e9277266a711ab9466", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/35", "iss_label": "", "title": ".py files are not being created. I just get all_output.txt that I manually have to create from.", "body": "Hi, I absolutely love this script. This is the most accurate auto-GPT development script I have tried yet, it's so powerful!\r\n\r\nIn the demo video it shows the script creating each of the development files, in my case .py files within the workspace folder automatically. My build isn't doing this I just get an all_output.txt file with all .py files codes in one place and a single python file.\r\n\r\nHow do I ensure that GPT-Engineer automatically creates the .py files for me. 
Thanks", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/120", "commit_html_url": null, "file_loc": "{'base_commit': 'c4c1203fc07b2e23c3e5a5e9277266a711ab9466', 'files': [{'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, \"(None, 'parse_chat', 6)\": {'add': [11], 'mod': [6, 7, 8, 10, 13, 14, 15, 16, 17, 18, 19]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/chat_to_files.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/1128", "iss_label": "bug\ntriage", "title": "Applying diffs failing silently ", "body": "## Expected Behavior\r\n\r\nI would expect GPT engineer to either successfully apply all diffs sent by the AI or fail in a way that lets you know which diffs have been applied, which failed, and allows you to manually salvage the failed diff parts by copy and pasting \r\n\r\n## Current Behavior\r\n\r\nThe current behaviour seems to be that it applies the sections of the diff which it can and silently throws the rest of the code away. From a users perspective it seems like everything has gone well - but in reality its only applied a portion of the diff. \r\n\r\nThis is really bad from a usability perspective - for one, a partially applied diff is obviously never going to be working code so applying it is pointless. Also, the knowledge that this is the behaviour pf gpte means i need to manually check every single output to verify its applied the whole diff which is a complete waste of time for diffs which do apply succesfully. \r\n\r\nNot applying any of the diffs at all would actually be a better outcome for me, as at least i would have a consistent workflow of copy and pasting... 
however a more sensible solution is applying the diffs it can, and if it can't apply a diff for a file, not apply any change to it at all, and instead providing an error output which is convenient for the user to copy and paste manually into the file \r\n\r\n### Failure Logs\r\nI can't upload failure logs as the code I'm working on is sensitive", "code": null, "pr_html_url": "https://github.com/AntonOsika/gpt-engineer/pull/1138", "commit_html_url": null, "file_loc": "{'base_commit': '7b91676a0c2ccd4589a42f2cadbf1e69f93ad81b', 'files': [{'path': 'gpt_engineer/core/diff.py', 'status': 'modified', 'Loc': {\"('Diff', 'validate_and_correct', 340)\": {'mod': [357]}}}, {'path': 'tests/core/test_salvage_correct_hunks.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt_engineer/core/diff.py"], "doc": [], "test": ["tests/core/test_salvage_correct_hunks.py"], "config": [], "asset": []}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f7bb578a1409b1f96aff534ff5ed2bd10502296f", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1527", "iss_label": "", "title": "Add copy to clipboard in plaintext for image details", "body": "Add copy to clipboard in plaintext for image details\r\n\r\nA button we can click to copy to clipboard all of the image details shown in the log output file. If not on the log page then on the app itself.\r\n\r\nThe quick copying of these settings enables us to share our work methods with others in the community more smoothly, thereby assisting them in a more efficient and effective way.\r\n\r\n![chrome_ybBu4Zoryf](https://github.com/lllyasviel/Fooocus/assets/57927413/a1ad7fa5-5a99-43e5-8420-e2c4aeb055de)\r\n\r\n![chrome_6zslF9Z3UD](https://github.com/lllyasviel/Fooocus/assets/57927413/ded36d98-377a-4130-b20f-01defbee1e6b)\r\n\r\nWhen I copy the text manually from the log file it looks like a garbled mess. See example below.\r\n\r\n```\r\nPrompt | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background. full body\r\n-- | --\r\nNegative Prompt | \u00a0\r\nFooocus V2 Expansion | Cute troll with fluffy long spiked hair wearing a ugly Christmas sweater. snow falling down and troll village in the background.
full body, intricate, elegant, highly detailed, sharp focus, illuminated, sunny, magical, scenic, artistic, true colors, deep aesthetic, very inspirational, cute, cozy, inspired, original, fine detail, professional, winning, enhanced, polished\r\nStyles | ['SAI Photographic', 'Fooocus V2', 'Artstyle Hyperrealism', 'MRE Artistic Vision']\r\nPerformance | Quality\r\nResolution | (1024, 1024)\r\nSharpness | 3\r\nGuidance Scale | 1.7\r\nADM Guidance | (1.5, 0.8, 0.3)\r\nBase Model | dreamshaperXL_turboDpmppSDEKarras.safetensors\r\nRefiner Model | None\r\nRefiner Switch | 0.5\r\nSampler | dpmpp_sde\r\nScheduler | karras\r\nSeed | 5044578018584347060\r\nVersion | v2.1.853\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/lllyasviel/Fooocus/commit/f7bb578a1409b1f96aff534ff5ed2bd10502296f", "file_loc": "{'base_commit': 'f7bb578a1409b1f96aff534ff5ed2bd10502296f', 'files': [{'path': 'fooocus_version.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'modules/async_worker.py', 'status': 'modified', 'Loc': {\"(None, 'handler', 116)\": {'mod': [400, 401, 780, 782]}}}, {'path': 'modules/private_logger.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, \"(None, 'log', 21)\": {'add': [38, 61], 'mod': [42, 60]}}}, {'path': 'update_log.md', 'status': 'modified', 'Loc': {'(None, None, 1)': {'add': [0]}}}, {'path': 'webui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 14, 111, 512], 'mod': [103]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/private_logger.py", "webui.py", "modules/async_worker.py", "fooocus_version.py"], "doc": ["update_log.md"], "test": [], "config": [], "asset": []}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/2561", "iss_label": "enhancement", "title": "[Feature Request]: Prompt embedded LoRAs", "body": "### Is there an existing issue for this?\r\n\r\n- [x] I have searched the existing issues and checked the recent builds/commits\r\n\r\n### What would your feature do?\r\n\r\nSimilar to how A1111 handles LoRAs by default, I believe there should be an option to embed LoRAs in the prompt by using the following structure:\r\n```csharp\r\n\r\n```\r\n\r\nThe current workflow works well, but has a few limitations, namely being able to use wildcards and LoRAs together for more dynamic prompts. Additionally, this feature already exists for embeddings, so I reckon adding it for LoRAs should be trivial.\r\n\r\n### Proposed workflow\r\n\r\n1. Enter LoRAs in the prompt using the `` structure\r\n2. 
Generate images, and LoRAs are loaded for each iteration\r\n\r\n### Additional information\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/2323", "commit_html_url": null, "file_loc": "{'base_commit': '3a55e7e3910b8ae58f82a5a0e4c11d7d4fa3143f', 'files': [{'path': 'modules/async_worker.py', 'status': 'modified', 'Loc': {\"(None, 'handler', 134)\": {'add': [435], 'mod': [155, 453, 454, 655, 865, 908, 912]}, \"(None, 'worker', 19)\": {'mod': [47, 50, 51, 72]}, \"(None, 'callback', 806)\": {'mod': [810]}}}, {'path': 'modules/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23], 'mod': [11]}}}, {'path': 'modules/sdxl_styles.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5, 7, 12]}, \"(None, 'apply_wildcards', 68)\": {'mod': [68, 69, 70, 71, 72, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 91, 92, 95]}, \"(None, 'get_words', 95)\": {'mod': [104]}}}, {'path': 'modules/util.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 16], 'mod': [1]}, \"(None, 'get_files_from_folder', 166)\": {'mod': [166, 167, 168, 170, 172, 173, 174, 175, 176, 177, 178, 179, 180, 182]}, \"('PromptStyle', None, 358)\": {'mod': [358]}, \"(None, 'get_enabled_loras', 396)\": {'mod': [397]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/async_worker.py", "modules/sdxl_styles.py", "modules/config.py", "modules/util.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "8e62a72a63b30a3067d1a1bc3f8d226824bd9283", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1671", "iss_label": "bug (AMD)", "title": "Cannot use image prompts", "body": "I am trying to use 2x images as an image prompt but when I press generate this is what I'm getting (I can generate just fine without image prompts):\r\n\r\nFull console log:\r\n\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 3\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 1.5\r\n[Parameters] Seed = 953753918774495193\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 6 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\nmodel_type EPS\r\nUNet ADM Dimension 2816\r\nUsing split attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing split attention in VAE\r\nextra {'cond_stage_model.clip_l.logit_scale', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.text_projection'}\r\nBase model loaded: H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['None', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\models\\checkpoints\\realisticStockPhoto_v10.safetensors].\r\nRequested to load SDXLClipModel\r\nLoading 1 new model\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Image processing ...\r\nTraceback (most recent call last):\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 806, in worker\r\n handler(task)\r\n File 
\"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\modules\\async_worker.py\", line 647, in handler\r\n task[0] = ip_adapter.preprocess(cn_img, ip_adapter_path=ip_adapter_path)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\ip_adapter.py\", line 185, in preprocess\r\n cond = image_proj_model.model(cond).to(device=ip_adapter.load_device, dtype=ip_adapter.dtype)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 117, in forward\r\n latents = attn(x, latents) + latents\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\Fooocus\\extras\\resampler.py\", line 55, in forward\r\n latents = self.norm2(latents)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\modules\\normalization.py\", line 190, in forward\r\n return F.layer_norm(\r\n File \"H:\\Programs\\Fooocus_win64_2-1-831\\python_embeded\\lib\\site-packages\\torch\\nn\\functional.py\", line 2515, in layer_norm\r\n return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)\r\nRuntimeError: Expected all tensors to be on the same device, but found at least two devices, privateuseone:0 and cpu!\r\nTotal time: 37.40 seconds\r\n\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1678", "commit_html_url": null, "file_loc": "{'base_commit': '8e62a72a63b30a3067d1a1bc3f8d226824bd9283', 'files': [{'path': 'extras/ip_adapter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10], 'mod': [5]}, \"(None, 'load_ip_adapter', 90)\": {'mod': [119, 120, 121, 122, 123, 124, 125, 126]}}}, {'path': 'fooocus_version.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fooocus_version.py", "extras/ip_adapter.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d57afc88a48359bc1642c2ae30a091f0426eff43", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1063", "iss_label": "", "title": "Faceswap crashes ", "body": "**Describe the problem**\r\nThe program crashes 
when trying to use an image as prompt and selecting the faceswap advanced option\r\n\r\n**Full Console Log**\r\nRequirement already satisfied: pygit2==1.12.2 in /usr/local/lib/python3.10/dist-packages (1.12.2)\r\nRequirement already satisfied: cffi>=1.9.1 in /usr/local/lib/python3.10/dist-packages (from pygit2==1.12.2) (1.16.0)\r\nRequirement already satisfied: pycparser in /usr/local/lib/python3.10/dist-packages (from cffi>=1.9.1->pygit2==1.12.2) (2.21)\r\n/content\r\nfatal: destination path 'Fooocus' already exists and is not an empty directory.\r\n/content/Fooocus\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py', '--preset', 'realistic', '--share']\r\nLoaded preset: /content/Fooocus/presets/realistic.json\r\nPython 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]\r\nFooocus version: 2.1.824\r\nRunning on local URL: http://127.0.0.1:7865/\r\nRunning on public URL: https://fb6371be5d9ced0c1d.gradio.live/\r\n\r\nThis share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)\r\nTotal VRAM 15102 MB, total RAM 12983 MB\r\n2023-11-29 21:03:50.202601: E tensorflow/compiler/xla/stream_executor/cuda/cuda_dnn.cc:9342] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered\r\n2023-11-29 21:03:50.202658: E tensorflow/compiler/xla/stream_executor/cuda/cuda_fft.cc:609] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered\r\n2023-11-29 21:03:50.202708: E tensorflow/compiler/xla/stream_executor/cuda/cuda_blas.cc:1518] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered\r\n2023-11-29 21:03:52.244376: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\nSet vram state to: NORMAL_VRAM\r\nDisabling smart memory management\r\nDevice: cuda:0 Tesla T4 : native\r\nVAE dtype: torch.float32\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nmodel_type EPS\r\nadm 2816\r\nUsing pytorch attention in VAE\r\nWorking with z of shape (1, 4, 32, 32) = 4096 dimensions.\r\nUsing pytorch attention in VAE\r\nextra keys {'cond_stage_model.clip_l.text_projection', 'cond_stage_model.clip_g.transformer.text_model.embeddings.position_ids', 'cond_stage_model.clip_l.logit_scale'}\r\nBase model loaded: /content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors\r\nRequest to load LoRAs [['SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors', 0.25], ['None', 1.0], ['None', 1.0], ['None', 1.0], ['None', 1.0]] for model [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors].\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for UNet [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 788 keys at weight 0.25.\r\nLoaded LoRA [/content/Fooocus/models/loras/SDXL_FILM_PHOTOGRAPHY_STYLE_BetaV0.4.safetensors] for CLIP [/content/Fooocus/models/checkpoints/realisticStockPhoto_v10.safetensors] with 264 keys at weight 0.25.\r\nFooocus V2 Expansion: Vocab with 642 words.\r\nFooocus Expansion engine loaded for cuda:0, use_fp16 = True.\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.30 seconds\r\nApp started successful. 
Use the app with http://127.0.0.1:7865/ or 127.0.0.1:7865 or https://fb6371be5d9ced0c1d.gradio.live/\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 604471590939558783\r\n[Parameters] Sampler = dpmpp_2m_sde_gpu - karras\r\n[Parameters] Steps = 60 - 30\r\n[Fooocus] Initializing ...\r\n[Fooocus] Loading models ...\r\nRefiner unloaded.\r\n[Fooocus] Processing prompts ...\r\n[Fooocus] Preparing Fooocus text #1 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full light, gorgeous, amazing, elegant, intricate, highly detailed, dynamic, rich deep vivid colors, beautiful, very inspirational, inspiring, thought, fancy, sharp focus, colorful, epic, professional, artistic, new, charismatic, cool, brilliant, awesome, attractive, shiny, fine detail, pretty, focused, creative\r\n[Fooocus] Preparing Fooocus text #2 ...\r\n[Prompt Expansion] Portrait of a young man on the beach, full pretty, attractive, fine detail, intricate, elegant, luxury, elite, dramatic light, highly detailed, cinematic, complex, sharp focus, illuminated, amazing, marvelous, thought, epic, fabulous, colorful, shiny, brilliant, symmetry, great, excellent composition, ambient, dynamic, vibrant colors, relaxed, beautiful\r\n[Fooocus] Encoding positive #1 ...\r\n[Fooocus Model Management] Moving model(s) has taken 0.11 seconds\r\n[Fooocus] Encoding positive #2 ...\r\n[Fooocus] Encoding negative #1 ...\r\n[Fooocus] Encoding negative #2 ...\r\n[Parameters] Denoising Strength = 1.0\r\n[Parameters] Initial Latent shape: Image Space (1152, 896)\r\nPreparation time: 3.60 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.40 seconds\r\n100% 60/60 [00:55<00:00, 1.09it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 60.73 seconds\r\n[Sampler] refiner_swap_method = joint\r\n[Sampler] sigma_min = 0.0291671771556139, sigma_max = 14.614643096923828\r\nRequested to load SDXL\r\nLoading 1 new model\r\n[Fooocus Model Management] Moving model(s) has taken 2.01 seconds\r\n100% 60/60 [00:56<00:00, 1.06it/s]\r\nImage generated with private log at: /content/Fooocus/outputs/2023-11-29/log.html\r\nGenerating and saving time: 61.85 seconds\r\nRequested to load SDXLClipModel\r\nRequested to load GPT2LMHeadModel\r\nLoading 2 new models\r\n[Fooocus Model Management] Moving model(s) has taken 1.57 seconds\r\nTotal time: 131.21 seconds\r\n[Parameters] Adaptive CFG = 7\r\n[Parameters] Sharpness = 2\r\n[Parameters] ADM Scale = 1.5 : 0.8 : 0.3\r\n[Parameters] CFG = 3.0\r\n[Parameters] Seed = 7513856776859948774\r\n[Fooocus] Downloading control models ...\r\n[Fooocus] Loading control models ...\r\nextra keys clip vision: ['vision_model.embeddings.position_ids']\r\n", "code": null, "pr_html_url": "https://github.com/lllyasviel/Fooocus/pull/1710", "commit_html_url": null, "file_loc": "{'base_commit': 'd57afc88a48359bc1642c2ae30a091f0426eff43', 'files': [{'path': 'fooocus_colab.ipynb', 'status': 'modified', 'Loc': {'(None, None, 15)': {'mod': [15]}}}, {'path': 'readme.md', 'status': 'modified', 'Loc': {'(None, None, 127)': {'add': [127]}, '(None, None, 118)': {'mod': [118]}, '(None, None, 124)': {'mod': [124]}}}, {'path': 'ldm_patched/modules/args_parser.py', 'Loc': {'(None, None, None)': [99]}, 
'base_commit': 'cca0ca704a713ab153938e78de6787609c723cad'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fooocus_colab.ipynb", "ldm_patched/modules/args_parser.py"], "doc": ["readme.md"], "test": [], "config": [], "asset": []}} +{"organization": "odoo", "repo_name": "odoo", "base_commit": "72ec0050b442214c9be93907fc01a48832243c15", "iss_html_url": "https://github.com/odoo/odoo/issues/7306", "iss_label": "", "title": "[v8.0] Bank statement : Customer Import invoice wizard do not auto-fill the right field", "body": "Step to reproduce:\n\ncreate a customer invoice\ncreate a new bank statement and import this invoice\nclick on 'Reconcile'\nProblem: No match proposition between the bank statement line and the invoice move line can be found since the communication field is '/'. (The invoice number is in the field 'Reference' instead)\n\nSo please the ref must go to communication\n\nThanks\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/odoo/odoo/commit/72ec0050b442214c9be93907fc01a48832243c15", "file_loc": "{'base_commit': '72ec0050b442214c9be93907fc01a48832243c15', 'files': [{'path': 'addons/account/account_bank_statement.py', 'status': 'modified', 'Loc': {\"('account_bank_statement_line', 'get_reconciliation_proposition', 537)\": {'mod': [575]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["addons/account/account_bank_statement.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "197287fc303119bf71caf9b3f72280cab08da749", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1147", "iss_label": "", "title": "[Bug]: \u7ffb\u8bd1arxiv\u6587\u6863\u62a5\u9519\uff0c\u65e0\u8bba\u672c\u5730\u81ea\u5df1\u642d\u5efa\u8fd8\u662f\u5b98\u65b9\u5728\u7ebf\u5747\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5b98\u65b9\u5728\u7ebf\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \"./toolbox.py\", line 165, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 249, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt)\r\n> File \"./crazy_functions/Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 141, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \"./toolbox.py\", line 507, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"/usr/lib/python3.8/tarfile.py\", line 1608, in open\r\n> raise ReadError(\"file could not be opened successfully\")\r\n> tarfile.ReadError: file could not be opened successfully\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://localhost:7890, 
\u4ee3\u7406\u6240\u5728\u5730\uff1aJapan\r\n\r\n\u672c\u5730\u642d\u5efa\u7248\u62a5\u9519\u4ee3\u7801\u5982\u4e0b\uff1a\r\n\r\n> [Local Message] \u5b9e\u9a8c\u6027\u51fd\u6570\u8c03\u7528\u51fa\u9519:\r\n> \r\n> Traceback (most recent call last):\r\n> File \".\\toolbox.py\", line 150, in decorated\r\n> yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 250, in Latex\u7ffb\u8bd1\u4e2d\u6587\u5e76\u91cd\u65b0\u7f16\u8bd1PDF\r\n> txt, arxiv_id = yield from arxiv_download(chatbot, history, txt, allow_cache)\r\n> File \".\\crazy_functions\\Latex\u8f93\u51faPDF\u7ed3\u679c.py\", line 139, in arxiv_download\r\n> extract_archive(file_path=dst, dest_dir=extract_dst)\r\n> File \".\\toolbox.py\", line 461, in extract_archive\r\n> with tarfile.open(file_path, 'r:*') as tarobj:\r\n> File \"D:\\academic-gpt\\installer_files\\env\\lib\\tarfile.py\", line 1811, in open\r\n> raise ReadError(f\"file could not be opened successfully:\\n{error_msgs_summary}\")\r\n> tarfile.ReadError: file could not be opened successfully:\r\n> - method gz: ReadError('invalid header')\r\n> - method bz2: ReadError('not a bzip2 file')\r\n> - method xz: ReadError('not an lzma file')\r\n> - method tar: ReadError('invalid header')\r\n> \r\n> \u5f53\u524d\u4ee3\u7406\u53ef\u7528\u6027:\r\n> \r\n> \u4ee3\u7406\u914d\u7f6e socks5h://127.0.0.1:12341, \u4ee3\u7406\u6240\u5728\u5730\uff1aHong Kong - Cloudflare, Inc.\r\n\r\n\u6240\u7ffb\u8bd1\u7684arxiv\u6587\u6863\u7684\u5730\u5740\u4e3a\uff1ahttps://arxiv.org/abs/2112.10551\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![msedge_HmO7O9M6OT](https://github.com/binary-husky/gpt_academic/assets/10786234/51e6ff95-9b95-47cd-b671-322aa1808389)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/197287fc303119bf71caf9b3f72280cab08da749", "file_loc": "{'base_commit': '197287fc303119bf71caf9b3f72280cab08da749', 'files': [{'path': 'shared_utils/handle_upload.py', 'status': 'modified', 'Loc': {\"(None, 'extract_archive', 91)\": {'mod': [107, 108, 109, 110, 111, 112, 113, 114, 116, 117]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["shared_utils/handle_upload.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "65317e33af87640b68c84c9f6ee67188b76c6d7a", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/558", "iss_label": "", "title": "\u80fd\u5426\u5229\u7528EdgeGPT\uff0c\u652f\u6301\u8c03\u7528\u5fae\u8f6fBing\u63a5\u53e3", "body": "\u5927\u4f6c\u4eec\u6c42\u6c42\u4e86\uff0c\u770b\u770b\u8fd9\u4e2a\u9879\u76ee\u5427\uff0chttps://github.com/acheong08/EdgeGPT\r\n\u5982\u679c\u53ef\u4ee5\u65b9\u4fbf\u5730\u8c03\u7528Bing\u63a5\u53e3\uff0c\u6216\u8005\u672a\u6765\u7684\u767e\u5ea6\u3001\u963f\u91cc\u7b49\u7b2c\u4e09\u65b9\u63a5\u53e3\uff0c\u5bf9\u4e8e\u6ca1\u6709openAI-key\u4e5f\u6ca1\u6cd5\u672c\u5730\u90e8\u7f72GLM\u7684\u540c\u5b66\u662f\u798f\u97f3\u554a", "code": null, "pr_html_url": null, "commit_html_url": 
"https://github.com/binary-husky/gpt_academic/commit/65317e33af87640b68c84c9f6ee67188b76c6d7a", "file_loc": "{'base_commit': '65317e33af87640b68c84c9f6ee67188b76c6d7a', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [65], 'mod': [47, 48]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21, 119]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e359fff0405c4cb865b809b4ecfc0a95a54d2512", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1554", "iss_label": "", "title": "[Bug]: docker\u5b89\u88c5\u7248\u672c\u9002\u914dspark api\u62a5\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nDocker-Compose\uff08Windows/Mac\uff09\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u5728mac\u672c\u5730\u4f7f\u7528conda\u5b89\u88c5\u65b9\u5f0f\uff0c\u9002\u914dspark api\u53ef\u4ee5\u6b63\u5e38\u8fd0\u884c\u3002\u4f46\u662f\u901a\u8fc7docker compose\u65b9\u5f0f\u5b89\u88c5\u4e4b\u540e\u901a\u8fc7spark api\u4f1a\u51fa\u73b0\u62a5\u9519\uff0c\u4e0d\u8fc7\u5343\u5e06api\u5219\u53ef\u4ee5\u6b63\u5e38\u4f7f\u7528\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\r\n\"Snipaste_2024-02-14_21-12-27\"\r\n\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\ngpt_academic_nolocalllms-1 | error: Connection to remote host was lost.\r\ngpt_academic_nolocalllms-1 | Exception ignored in thread started by: .run at 0x2aaaf7fdfa60>\r\ngpt_academic_nolocalllms-1 | Traceback (most recent call last):\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/com_sparkapi.py\", line 113, in run\r\ngpt_academic_nolocalllms-1 | ws.send(data)\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/websocket/_app.py\", line 284, in send\r\ngpt_academic_nolocalllms-1 | raise WebSocketConnectionClosedException(\"Connection is already closed.\")\r\ngpt_academic_nolocalllms-1 | websocket._exceptions.WebSocketConnectionClosedException: Connection is already closed.\r\ngpt_academic_nolocalllms-1 | Traceback (most recent call last):\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/routes.py\", line 422, in run_predict\r\ngpt_academic_nolocalllms-1 | output = await app.get_blocks().process_api(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/blocks.py\", line 1323, in process_api\r\ngpt_academic_nolocalllms-1 | result = await self.call_function(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/blocks.py\", line 1067, in call_function\r\ngpt_academic_nolocalllms-1 | prediction = await utils.async_iteration(iterator)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File 
\"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 336, in async_iteration\r\ngpt_academic_nolocalllms-1 | return await iterator.__anext__()\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 329, in __anext__\r\ngpt_academic_nolocalllms-1 | return await anyio.to_thread.run_sync(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/to_thread.py\", line 56, in run_sync\r\ngpt_academic_nolocalllms-1 | return await get_async_backend().run_sync_in_worker_thread(\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 2134, in run_sync_in_worker_thread\r\ngpt_academic_nolocalllms-1 | return await future\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/anyio/_backends/_asyncio.py\", line 851, in run\r\ngpt_academic_nolocalllms-1 | result = context.run(func, *args)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/usr/local/lib/python3.11/site-packages/gradio/utils.py\", line 312, in run_sync_iterator_async\r\ngpt_academic_nolocalllms-1 | return next(iterator)\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^^^^^^^\r\ngpt_academic_nolocalllms-1 | File \"/gpt/toolbox.py\", line 115, in decorated\r\ngpt_academic_nolocalllms-1 | yield from f(txt_passon, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, system_prompt, *args)\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/bridge_all.py\", line 765, in predict\r\ngpt_academic_nolocalllms-1 | yield from method(inputs, llm_kwargs, *args, **kwargs)\r\ngpt_academic_nolocalllms-1 | File \"/gpt/request_llms/bridge_spark.py\", line 60, in predict\r\ngpt_academic_nolocalllms-1 | if response == f\"[Local Message] \u7b49\u5f85{model_name}\u54cd\u5e94\u4e2d ...\":\r\ngpt_academic_nolocalllms-1 | ^^^^^^^^\r\ngpt_academic_nolocalllms-1 | UnboundLocalError: cannot access local variable 'response' where it is not associated with a value\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/e359fff0405c4cb865b809b4ecfc0a95a54d2512", "file_loc": "{'base_commit': 'e359fff0405c4cb865b809b4ecfc0a95a54d2512', 'files': [{'path': 'request_llms/bridge_qianfan.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 135)\": {'add': [148, 151], 'mod': [161, 162, 163, 164, 165, 166]}}}, {'path': 'request_llms/bridge_qwen.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 25)\": {'add': [53]}}}, {'path': 'request_llms/bridge_skylark2.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 32)\": {'add': [58]}}}, {'path': 'request_llms/bridge_spark.py', 'status': 'modified', 'Loc': {\"(None, 'predict', 36)\": {'add': [54]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_qwen.py", "request_llms/bridge_qianfan.py", "request_llms/bridge_skylark2.py", "request_llms/bridge_spark.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "c17fc2a9b55b1c7447718a06a3eac4378828bb22", "iss_html_url": 
"https://github.com/binary-husky/gpt_academic/issues/1021", "iss_label": "waiting feedback", "title": "[Feature]: \u901a\u4e49\u5343\u95ee\u7684\u6a21\u578b\u5f00\u6e90\u4e86,\u5efa\u8bae\u52a0\u5165.", "body": "### Class | \u7c7b\u578b\n\nNone\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\n\u9644\uff1a\u5f00\u6e90\u5730\u5740\r\n\r\n\u9b54\u642dModelScope\uff1a\r\n\r\nhttps://modelscope.cn/models/qwen/Qwen-7B/summary\r\n\r\nhttps://modelscope.cn/models/qwen/Qwen-7B-Chat/summary\r\n\r\nHugging Face\uff1ahttps://huggingface.co/Qwen\r\n\r\nGitHub\uff1ahttps://github.com/QwenLM/Qwen-7B", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/c17fc2a9b55b1c7447718a06a3eac4378828bb22", "file_loc": "{'base_commit': 'c17fc2a9b55b1c7447718a06a3eac4378828bb22', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [337]}}}, {'path': 'request_llm/bridge_qwen.py', 'status': 'modified', 'Loc': {\"('GetONNXGLMHandle', 'load_model_and_tokenizer', 26)\": {'mod': [35, 37, 38, 39, 40]}, \"('GetONNXGLMHandle', None, 19)\": {'mod': [43, 57, 58]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "request_llm/bridge_qwen.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1053", "iss_label": "ToDo", "title": "[Bug]: \u672c\u5730\u7ffb\u8bd1Latex\u51fa\u9519", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n* \u95ee\u9898\uff1a\u627e\u4e0d\u5230\u6240\u8c13\u7684\u201cfp\u201d\uff08\u6587\u4ef6\u6307\u9488\uff09\r\n![image](https://github.com/binary-husky/gpt_academic/assets/62052010/40949aca-4ebf-4b24-ade6-a8423654b228)\r\n\r\n\r\n* stack\uff1a\u8fd9\u91cc\u662f\u5c06\u5728**tex\u6587\u4ef6\u5408\u5e76\uff08merge_tex_files\uff09** \u51fd\u6570\u4e2d\u7684\u4e00\u4e2a\u5b50\u51fd\u6570\u7684\u8c03\u7528\uff08merge_tex_files_\uff09\uff0c\u4e3b\u8981\u4f5c\u7528\u5c31\u662f\u5c06\u539f\u59cbtex\u4e2d\u7684\\input\u547d\u4ee4\u5185\u5bb9\u8fdb\u884c\u5408\u5e76\uff0c\u4f46\u5b9e\u9645\u8fc7\u7a0b\u4e2d\u5b58\u5728\u4e00\u4e2a\u95ee\u9898\uff0c\u901a\u8fc7debug\u627e\u5230\uff0c\u5177\u4f53\u7684debug\u4ee3\u7801\uff08\u4e5f\u5c31\u52a0\u4e86\u70b9print\uff09\u548c\u7ed3\u679c\u56fe\u9644\u5728\u4e86\u4e0b\u9762\r\n\r\n```python\r\ndef merge_tex_files_(project_foler, main_file, mode):\r\n \"\"\"\r\n Merge Tex project recrusively\r\n \"\"\"\r\n main_file = rm_comments(main_file)\r\n for s in reversed([q for q in re.finditer(r\"\\\\input\\{(.*?)\\}\", main_file, re.M)]):\r\n f = s.group(1)\r\n fp = os.path.join(project_foler, f)\r\n fp = find_tex_file_ignore_case(fp)\r\n if fp:\r\n with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()\r\n else:\r\n raise RuntimeError(f'\u627e\u4e0d\u5230{fp}\uff0cTex\u6e90\u6587\u4ef6\u7f3a\u5931\uff01')\r\n c = merge_tex_files_(project_foler, c, mode)\r\n 
main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]\r\n return main_file\r\n```\r\n\r\n**debug\u7684\u4ee3\u7801**\r\n```python\r\ndef merge_tex_files_(project_foler, main_file, mode):\r\n \"\"\"\r\n Merge Tex project recrusively\r\n \"\"\"\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print('======== IN merge_tex_files_(SUB FUN) Function ===========')\r\n # print('project_foler:{}\\nmain_file:{}\\nmode:{}'.format(project_foler,main_file,mode))\r\n ## ===\r\n\r\n main_file = rm_comments(main_file)\r\n for s in reversed([q for q in re.finditer(r\"\\\\input\\{(.*?)\\}\", main_file, re.M)]):\r\n ## === AAS ADDED FOR TEST ===\r\n print('======== IN LOOP of merge_tex_files_(SUB FUN)===========')\r\n print(\"s:\",s)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n f = s.group(1)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"f:\",f)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n fp = os.path.join(project_foler, f)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"fp1:\",fp)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n fp = find_tex_file_ignore_case(fp)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print(\"fp2:\",fp)\r\n ## === AAS ADDED FOR TEST === \r\n\r\n if fp:\r\n with open(fp, 'r', encoding='utf-8', errors='replace') as fx: c = fx.read()\r\n else:\r\n raise RuntimeError(f'\u627e\u4e0d\u5230{fp}\uff0cTex\u6e90\u6587\u4ef6\u7f3a\u5931\uff01')\r\n c = merge_tex_files_(project_foler, c, mode)\r\n main_file = main_file[:s.span()[0]] + c + main_file[s.span()[1]:]\r\n return main_file\r\n```\r\n\r\n**\u7ed3\u679c\u56fe**\r\n![image](https://github.com/binary-husky/gpt_academic/assets/62052010/a0e9b9c7-e416-4396-8ff8-5321807d23f7)\r\n\r\n**\u51fa\u9519\u90e8\u5206\u7684\u4ee3\u7801**\r\n```python\r\ndef find_tex_file_ignore_case(fp):\r\n dir_name = os.path.dirname(fp)\r\n base_name = os.path.basename(fp)\r\n\r\n ## === AAS ADDED FOR TEST ===\r\n print('============ IN find_tex_file_ignore_case Fun ==========')\r\n print('dir_name:',dir_name)\r\n print('base_name',base_name)\r\n ## === AAS ADDED FOR TEST ===\r\n\r\n ## \u51fa\u9519\u7684\u95ee\u9898\u5728\u4e8e\u662fbbl\u6587\u4ef6\u5bfc\u5165\uff0c\u800c\u4e0d\u662ftex\uff0c\u5c1d\u8bd5\u4e00\u4e0b\u53d6\u6d88tex\u9650\u5236\r\n if not base_name.endswith('.tex'): base_name+='.tex'\r\n ## === AAS ADDED FOR TEST ===\r\n \r\n if os.path.exists(pj(dir_name, base_name)): return pj(dir_name, base_name)\r\n # go case in-sensitive\r\n import glob\r\n for f in glob.glob(dir_name+'/*.tex'):\r\n base_name_s = os.path.basename(fp)\r\n if base_name_s.lower() == base_name.lower(): return f\r\n return None\r\n```\r\n\r\n* \u5b9e\u9645\u9519\u8bef\u539f\u56e0\uff1a\u7b80\u5355\u800c\u8a00\u5c31\u662f\uff1a\u8fd9\u4e2a\u5730\u65b9**\u53ea\u8003\u8651\u5230\u4e86tex\u5408\u5e76**\uff08`find_tex_file_ignore_case`\u51fd\u6570\uff09\uff0c\u8fd8\u5b58\u5728**\u6bd4\u5982`bbl`**\uff08\u53e6\u4e00\u79cd\u4e00\u79cd\u6587\u732e\u5f15\u7528\u7684\u683c\u5f0f\uff0c\u53ef\u4ee5\u76f4\u63a5\u585e\u5230tex\u91cc\u9762\u7f16\u8bd1\uff0c\u76f8\u5bf9\u6bd4\u8f83\u539f\u59cb\u4f46\u5c0f\u5de7\uff0c\u548c\u666e\u901a\u7684`.bib`\u5229\u7528`references`\u7a0d\u6709\u5dee\u5f02\uff09**\u6ca1\u6709\u8003\u8651\u5230**\uff0c\u6240\u4ee5\u9020\u6210\u4e86\u5408\u5e76\u8fc7\u7a0b\u4e2d\u7684tex\u6587\u4ef6\u7f3a\u5931\r\n\r\n* 
\u6539\u8fdb\u5efa\u8bae\uff1ainput\u8fd9\u4e2a\u5730\u65b9\u6765\u8bf4\u4e00\u822c\u786e\u5b9e\u53ea\u6709tex\uff0c\u7528find_tex_file_ignore_case\u8fd9\u4e2a\u51fd\u6570\u4e5f\u633a\u597d\u7684\uff0c\u4f46\u662f\u53ef\u4ee5\u8003\u8651\u4ee5\u4e0b\u5176\u4ed6\u60c5\u51b5\uff0c\u6bd4\u5982\u8bf4\u7eaf\u6587\u672c\uff08.txt)\uff0c\u5176\u4ed6code\uff08`.c, .cpp, .py`\u7b49\u7b49\uff09\uff0c\r\n\r\n- \u65b9\u6848\uff1a**\u76f4\u63a5\u53bb\u6389tex\u7684\u9650\u5236**\uff0c\u4ec0\u4e48\u90fd\u76f4\u63a5\u5f80\u91cc\u63d2\u5165\u5373\u53ef\uff0c\u7136\u540e\u4ea4\u7ed9tex\u7f16\u8bd1\uff0c\u5b9e\u9645\u4e0a\u4e5f\u662f\u8fd9\u6837\uff0c\u6240\u4ee5\u6ca1\u6709\u4ec0\u4e48\u5fc5\u8981\u5728input\u8fd9\u4e2a\u5730\u65b9\u628a\u53ea\u9650\u5236\u63d2\u5165tex\u3002\u5b9e\u5728\u6709\u9519\u8bef\u7684\u8bdd\u5176\u5b9e\u4ea4\u7ed9tex\u8f93\u51fa\u7136\u540e\u67e5\u770b\u5c31\u597d\u4e86\u3002\u4ee3\u7801\u65b9\u9762\u628a\u4e0b\u9762\u8fd9\u884c\u6ce8\u91ca\u6389\u5c31\u597d\u4e86\r\n```python\r\nif not base_name.endswith('.tex'): base_name+='.tex'\r\n```\r\n\r\n\r\n* p.s \u627e\u8fd9\u4e2a\u8fd8\u633a\u8d39\u4e8b\u548chhh\uff0c\u4e4d\u4e00\u770b\u8fd8\u4e0d\u77e5\u9053\u4ec0\u4e48\u60c5\u51b5\uff0c\u4f46\u5176\u5b9e\u5c0f\u95ee\u9898\r\n\r\n\u5728\u6ce8\u91ca\u6389\u4e4b\u540e\uff0c\u6682\u4e14\u5c31\u80fd\u6b63\u5e38\u4f7f\u7528\u4e86\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\nSee the Describe the bug part\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\nSee the Describe the bug part", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/19bd0c35ed05e6f99c8e3c0a8c994b1385341cae", "file_loc": "{'base_commit': '19bd0c35ed05e6f99c8e3c0a8c994b1385341cae', 'files': [{'path': 'crazy_functions/latex_fns/latex_toolbox.py', 'status': 'modified', 'Loc': {\"(None, 'find_tex_file_ignore_case', 281)\": {'add': [283], 'mod': [286]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["crazy_functions/latex_fns/latex_toolbox.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e24f077b68e38b679e5ca25853ea2c402f074ea3", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1120", "iss_label": "", "title": "[Feature]: \u5e0c\u671b\u80fd\u591f\u589e\u52a0azure openai gpt4 \u7684\u6a21\u578b\u9009\u9879", "body": "### Class | \u7c7b\u578b\n\n\u7a0b\u5e8f\u4e3b\u4f53\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\nRT", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gpt_academic/commit/e24f077b68e38b679e5ca25853ea2c402f074ea3", "file_loc": "{'base_commit': 'e24f077b68e38b679e5ca25853ea2c402f074ea3', 'files': [{'path': 'config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [83]}}}, {'path': 'request_llm/bridge_all.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [147]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llm/bridge_all.py", "config.py"], "doc": [], "test": [], "config": [], "asset": []}} 
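Each record above shares the same shape: top-level JSON fields such as `iss_html_url`, `title`, `body`, and `loctype`, plus a `file_loc` field whose value is itself a *string* containing a Python-repr dict (single quotes, tuple-like keys such as `('AI', '_create_chat_model', 330)`), so it needs a second parsing pass after `json.loads`. Below is a minimal illustrative sketch of how such records could be read; it assumes the underlying dataset has one JSON object per line (the display here wraps long records), and the `iter_records` helper and `data.jsonl` path are hypothetical, not part of the dataset itself:

```python
import ast
import json

def iter_records(path):
    """Yield one parsed record per JSONL line, additionally decoding
    file_loc, which is stored as the repr of a Python dict."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            file_loc = record.get("file_loc")
            if isinstance(file_loc, str):
                # e.g. "{'base_commit': '...', 'files': [{'path': ...}]}"
                record["file_loc"] = ast.literal_eval(file_loc)
            yield record

# Example usage: list the files each issue was localized to.
for rec in iter_records("data.jsonl"):
    files = [f["path"] for f in rec["file_loc"]["files"]]
    print(rec["iss_html_url"], files)
```

`ast.literal_eval` is used rather than `eval` because the `file_loc` strings contain only literals (dicts, lists, strings, ints, `None`), which it parses safely.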
+{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a799f769e4c48908c3efd64792384403392f2e82", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/67", "iss_label": "", "title": "Cluster faces during extract using dlib.chinese_whispers_clustering", "body": "I have had some success hacking together a pre-processing script to run over my training images. It uses [dlib.chinese_whispers_clustering](http://dlib.net/python/index.html#dlib.chinese_whispers_clustering) to group the found faces in the training data based on likeness. I think one of the keys to good results is good training sets, and this helps to prevent polluting the training data with other peoples faces as tends to be the case with Google image search sets or images with multiple faces.\r\n\r\nThere are a couple of ways I think this could be integrated into the project:\r\n\r\n1) during extract when generating face chips, discard non target faces (all faces not in the largest cluster)\r\n2) during convert where frames have multiple faces, identifying only the target face for replacement.\r\n\r\nHere's [the script](https://gist.github.com/badluckwiththinking/92dd6f155bc8babca6422b08b642d35d), sorry its a bit hacky, I just wanted something that worked and haven't cleaned it up. I'm not sure where I would begin to integrate it into the project, perhaps as an alternative plugin?\r\n\r\n", "code": null, "pr_html_url": "https://github.com/deepfakes/faceswap/pull/61", "commit_html_url": null, "file_loc": "{'base_commit': 'a799f769e4c48908c3efd64792384403392f2e82', 'files': [{'path': 'Dockerfile', 'status': 'modified', 'Loc': {'(None, None, 14)': {'add': [14]}, '(None, None, 10)': {'mod': [10, 11, 12]}, '(None, None, 16)': {'mod': [16]}, '(None, None, 18)': {'mod': [18]}}}, {'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 8, 17, 18, 19, 20]}}}, {'path': 'lib/DetectedFace.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/aligner.py', 'status': 'modified', 'Loc': {\"(None, 'get_align_mat', 25)\": {'mod': [26]}}}, {'path': 'lib/cli.py', 'status': 'modified', 'Loc': {\"('DirectoryProcessor', 'process_arguments', 34)\": {'add': [47], 'mod': [49, 51]}, '(None, None, None)': {'mod': [5]}, \"('DirectoryProcessor', 'process_directory', 51)\": {'mod': [56, 59]}, \"('DirectoryProcessor', None, 14)\": {'mod': [62]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [3, 4, 28]}, \"(None, 'detect_faces', 6)\": {'mod': [9, 11, 12, 13, 14, 15, 16]}}}, {'path': 'lib/model.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/training_data.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 3, 5], 'mod': [45]}, \"(None, 'get_training_data', 13)\": {'mod': [13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 26, 27, 29]}, \"(None, 'random_warp', 47)\": {'mod': [49]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16], 'mod': [1, 2]}, \"(None, 'get_folder', 8)\": {'mod': [10]}, \"(None, 'load_images', 18)\": {'mod': [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {'path': 'plugins/Convert_Adjust.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, \"('Convert', None, 5)\": {'mod': [6, 7]}, \"('Convert', 'patch_image', 12)\": {'mod': [21]}}}, {'path': 'plugins/Convert_Masked.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [6]}, \"('Convert', None, 8)\": {'mod': [9, 10]}, \"('Convert', 'get_new_face', 51)\": {'mod': [54]}, \"('Convert', 
'get_image_mask', 58)\": {'mod': [67]}}}, {'path': 'plugins/Extract_Align.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, \"('Extract', 'extract', 6)\": {'add': [7]}}}, {'path': 'plugins/Extract_Crop.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}}}, {'path': 'plugins/PluginLoader.py', 'status': 'modified', 'Loc': {\"('PluginLoader', None, 2)\": {'mod': [4, 5, 6, 9, 10, 11, 14, 15]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 64], 'mod': [7, 8, 9]}, \"('ConvertImage', 'process_image', 38)\": {'add': [48], 'mod': [42, 43, 44, 45, 47, 50, 51, 52, 53, 54, 57]}, \"('ConvertImage', None, 13)\": {'mod': [38, 39, 40]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}, \"('ExtractTrainingData', None, 8)\": {'mod': [18, 19]}, \"('ExtractTrainingData', 'process_image', 18)\": {'mod': [22, 23, 24, 25, 26, 28, 29, 30, 31]}}}, {'path': 'scripts/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10], 'mod': [5, 6, 8, 9]}, \"('TrainingProcessor', 'process_arguments', 18)\": {'mod': [24, 25, 26, 27, 28, 29, 30]}, \"('TrainingProcessor', None, 12)\": {'mod': [89, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108, 109, 111, 113, 114, 115, 116]}, \"('TrainingProcessor', 'process', 118)\": {'mod': [119, 122, 123, 125, 127, 129, 131, 132, 133, 134, 135, 136, 138, 139, 140, 142, 143, 144, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/aligner.py", "lib/model.py", "lib/training_data.py", "plugins/Convert_Adjust.py", "plugins/Extract_Align.py", "plugins/Extract_Crop.py", "scripts/train.py", "faceswap.py", "plugins/PluginLoader.py", "plugins/Convert_Masked.py", "lib/DetectedFace.py", "lib/faces_detect.py", "lib/utils.py", "lib/cli.py", "scripts/convert.py", "scripts/extract.py"], "doc": [], "test": [], "config": ["Dockerfile"], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/718", "iss_label": "bug", "title": "[Windows] cuda_path was not set if success on first check.", "body": "**Describe the bug**\r\nsetup.py file:\r\ncuDNN was not detected if `cuda_check` success in first check using \"nvcc -V\" because of `self.env.cuda_path` not set\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1, run `python setup.py` on windows 10 environment\r\n\r\n**Expected behavior**\r\ndetect cuDNN lib\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Desktop (please complete the following information):**\r\n - OS: Windows 10\r\n\r\n**Additional context**\r\nI temporary disable first method to check CUDA so it working for now.\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/f5dd18352c6640bc5c39a01642c7ac7356c0dea1", "file_loc": "{'base_commit': 'f5dd18352c6640bc5c39a01642c7ac7356c0dea1', 'files': [{'path': 'lib/gpu_stats.py', 'status': 'modified', 'Loc': {\"('GPUStats', 'initialize', 64)\": {'mod': [92]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {\"('Checks', None, 314)\": {'add': [353]}, \"('Checks', 'cudnn_check', 458)\": {'add': [459]}, \"('Install', 'ask_continue', 542)\": 
{'add': [543]}, \"('Checks', 'cuda_check_linux', 423)\": {'mod': [442, 443, 444]}, \"('Checks', 'cuda_check_windows', 445)\": {'mod': [451]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py", "lib/gpu_stats.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "dea984efc1c720832d7c32513c806b4b67cc6560", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/590", "iss_label": "", "title": "Disable logging", "body": "In previous commits before the logging implementation, multiple GPUS were able to run different tasks simultaneously ( extract/train/convert ).\r\n\r\nAfter the logging commit, only 1 task can be run due to the log file being in use by the first process.\r\n\r\nIs there an option to disable logging or specify a log file instead?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/deepfakes/faceswap/commit/dea984efc1c720832d7c32513c806b4b67cc6560", "file_loc": "{'base_commit': 'dea984efc1c720832d7c32513c806b4b67cc6560', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {\"('ScriptExecutor', 'execute_script', 83)\": {'mod': [85]}, \"('DirOrFileFullPaths', None, 150)\": {'mod': [150]}, \"('FaceSwapArgs', 'get_global_arguments', 265)\": {'mod': [274, 275, 276, 277]}}}, {'path': 'lib/gui/utils.py', 'status': 'modified', 'Loc': {\"('FileHandler', '__init__', 36)\": {'mod': [48, 49, 50, 51, 57, 58]}, \"('ContextMenu', None, 332)\": {'mod': [334]}}}, {'path': 'lib/logger.py', 'status': 'modified', 'Loc': {\"(None, 'log_setup', 71)\": {'mod': [71, 79]}, \"(None, 'file_handler', 89)\": {'mod': [89, 91, 92, 93]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/cli.py", "lib/gui/utils.py", "lib/logger.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "iss_html_url": "https://github.com/3b1b/manim/issues/1436", "iss_label": "bug", "title": "PNG images have a black background (no transparency)", "body": "### Description\r\nWhen trying do display a png image(with transparent background), it shows the background as black, didn't encouter the issue when trying with the cairo renderer.\r\n\r\n**Code**:\r\n```python\r\n img = ImageMobject(\"./dice.png\")\r\n self.play(FadeIn(img))\r\n```\r\n\r\n\r\n### Results\r\n\"result\"\r\n\r\n# Original image\r\n![dice](https://user-images.githubusercontent.com/38077008/110259246-8fdb3400-7f9e-11eb-992f-b658762c5830.png)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/3b1b/manim/commit/b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181", "file_loc": "{'base_commit': 'b4b4d39ec51cbfce7fafdc5ff0f9f4ddfd26b181', 'files': [{'path': 'manimlib/shaders/image/frag.glsl', 'status': 'modified', 'Loc': {'(None, None, 12)': {'mod': [12]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["manimlib/shaders/image/frag.glsl"]}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": 
"e1c049bece420bc1190eb3ed4d5d9878c431aa5e", "iss_html_url": "https://github.com/3b1b/manim/issues/394", "iss_label": "", "title": "import readline is failing", "body": "I am trying to run examples_scenes.py and it threw a ModuleNotFoundError when it tried to import readline. This should be easy to resolve - just pip install readline right? Nope. readline apparently doesn't work on Windows, and I got this strange follow-up error below. I don't know what to do at this point. Help?\r\n\r\n\r\nc:\\Tensorexperiments\\manim>python -m manim example_scenes.py SquareToCircle -pl\r\nTraceback (most recent call last):\r\n File \"C:\\Program Files\\Python36\\lib\\runpy.py\", line 193, in _run_module_as_main\r\n \"__main__\", mod_spec)\r\n File \"C:\\Program Files\\Python36\\lib\\runpy.py\", line 85, in _run_code\r\n exec(code, run_globals)\r\n File \"c:\\Tensorexperiments\\manim\\manim.py\", line 4, in \r\n import manimlib.stream_starter\r\n File \"c:\\Tensorexperiments\\manim\\manimlib\\stream_starter.py\", line 4, in \r\n import readline\r\nModuleNotFoundError: No module named 'readline'\r\n\r\nc:\\Tensorexperiments\\manim>pip install readline\r\nCollecting readline\r\n Downloading https://files.pythonhosted.org/packages/f4/01/2cf081af8d880b44939a5f1b446551a7f8d59eae414277fd0c303757ff1b/readline-6.2.4.1.tar.gz (2.3MB)\r\n 100% |\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.3MB 8.5MB/s\r\n Complete output from command python setup.py egg_info:\r\n error: this module is not meant to work on Windows\r\nCommand \"python setup.py egg_info\" failed with error code 1 in C:\\Users\\SAMERN~1\\AppData\\Local\\Temp\\pip-install-z8maklzo\\readline\\", "code": null, "pr_html_url": "https://github.com/3b1b/manim/pull/672", "commit_html_url": null, "file_loc": "{'base_commit': 'e1c049bece420bc1190eb3ed4d5d9878c431aa5e', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 11)': {'add': [11]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "660d1d1e64c5e28e96bf9b8172cd87d1d809fd07", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5876", "iss_label": "bug\nseverity:medium", "title": "[Bug]: \"The model produces invalid content\"", "body": "### Is there an existing issue for the same bug?\r\n\r\n- [X] I have checked the existing issues.\r\n\r\n### Describe the bug and reproduction steps\r\n\r\nhttps://www.all-hands.dev/share?share_id=dab4a77e7d64e7a4dc6124dc672d3f4beb2d411a33155977425b821e292d4f4c\r\nThe LLM is `gpt-4o`\r\nIn the logs I got\r\n```yaml\r\n{'error': {'message': 'The model produced invalid content. 
Consider modifying your prompt if you are seeing this error persistently.', 'type': 'model_error', 'param': None, 'code': None}}\r\n```\r\n\r\n### OpenHands Installation\r\n\r\nDocker command in README\r\n\r\n### OpenHands Version\r\n\r\n0.17\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Logs, Errors, Screenshots, and Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/All-Hands-AI/OpenHands/pull/7045", "commit_html_url": null, "file_loc": "{'base_commit': '660d1d1e64c5e28e96bf9b8172cd87d1d809fd07', 'files': [{'path': 'openhands/llm/llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [78]}, \"('LLM', 'wrapper', 180)\": {'mod': [220, 221, 222]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["openhands/llm/llm.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "7e8453cf1ec992e5df5cebfeda08552c58e7c9bc", "iss_html_url": "https://github.com/scrapy/scrapy/issues/2656", "iss_label": "", "title": "sos filepipelines 302", "body": "hi\r\n\r\n when i setting file_urls \"http://m.baidu.com/api?action=redirect&token=kpyysd&from=1014090y&type=app&dltype=new&refid=2650327114&tj=soft_5845028_88031597_%E8%AF%AD%E9%9F%B3%E6%90%9C%E7%B4%A2&refp=action_search&blink=da5b687474703a2f2f7265736765742e39312e636f6d2f536f66742f436f6e74726f6c6c65722e617368783f616374696f6e3d646f776e6c6f61642674706c3d312669643d34313034393931c658&crversion=1\"\r\n \r\n this url redirect 3 times so when i use scrap download it the scrapy retrun 302 how can i setting it can working ? please help me!\r\n![qq 20170316182328](https://cloud.githubusercontent.com/assets/3350372/23991950/1795e90a-0a76-11e7-9b19-4128bfdb3914.png)\r\n\r\n \r\n ", "code": null, "pr_html_url": "https://github.com/scrapy/scrapy/pull/2616", "commit_html_url": null, "file_loc": "{'base_commit': '7e8453cf1ec992e5df5cebfeda08552c58e7c9bc', 'files': [{'path': 'docs/topics/media-pipeline.rst', 'status': 'modified', 'Loc': {'(None, None, 324)': {'add': [324]}}}, {'path': 'scrapy/pipelines/files.py', 'status': 'modified', 'Loc': {\"('FilesPipeline', '__init__', 226)\": {'mod': [252]}}}, {'path': 'scrapy/pipelines/media.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 7]}, \"('MediaPipeline', None, 16)\": {'add': [29, 95], 'mod': [27]}, \"('MediaPipeline', '_check_media_to_download', 96)\": {'mod': [106]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 10, 14, 122]}, \"('Root', '__init__', 152)\": {'add': [162]}}}, {'path': 'tests/test_pipeline_media.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 84]}, \"('BaseMediaPipelineTestCase', None, 22)\": {'add': [24]}, \"('MediaPipelineTestCase', 'test_use_media_to_download_result', 245)\": {'add': [251]}, \"('BaseMediaPipelineTestCase', 'setUp', 26)\": {'mod': [28]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/pipelines/media.py", "scrapy/pipelines/files.py", "tests/mockserver.py"], "doc": ["docs/topics/media-pipeline.rst"], "test": ["tests/test_pipeline_media.py"], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "cc00f21a358923c03e334e245d58df0853d10661", 
"iss_html_url": "https://github.com/ansible/ansible/issues/57069", "iss_label": "networking\nmodule\nsupport:network\nnxos\nbug\naffects_2.7\ncisco", "title": "nxos_vpc breaks using default vrf", "body": "##### SUMMARY\r\nWhen using pkl_vrf\": \"default\" command is missing vrf value\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\nModule: nxos_vpc\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible 2.7.2\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]\r\n```\r\n\r\n##### CONFIGURATION\r\nasterisk due privacy\r\n```\r\nCACHE_PLUGIN(/etc/ansible/ansible.cfg) = jsonfile\r\nCACHE_PLUGIN_CONNECTION(/etc/ansible/ansible.cfg) = /**/config/ansible/facts\r\nDEFAULT_HOST_LIST(/etc/ansible/ansible.cfg) = [u'/**/config/ansible/**/hosts.yml']\r\nDISPLAY_SKIPPED_HOSTS(/etc/ansible/ansible.cfg) = True\r\nHOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False\r\nRETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\n```\r\n nxos_vpc:\r\n domain: 10\r\n pkl_src: 1.1.1.2\r\n pkl_dest: 1.1.1.1\r\n pkl_vrf: default\r\n```\r\n\r\n##### EXPECTED RESULTS\r\n```\r\n \"commands\": [\r\n \"vpc domain 10\",\r\n \"peer-keepalive destination 1.1.1.1 source 1.1.1.2 vrf default\",\r\n```\r\n##### ACTUAL RESULTS\r\n```\r\n \"commands\": [\r\n \"vpc domain 10\",\r\n \"peer-keepalive destination 1.1.1.1 source 1.1.1.2\",\r\n```\r\n\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/57370", "commit_html_url": null, "file_loc": "{'base_commit': 'cc00f21a358923c03e334e245d58df0853d10661', 'files': [{'path': 'lib/ansible/modules/network/nxos/nxos_vpc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [60, 63, 277]}, \"(None, 'main', 317)\": {'add': [396], 'mod': [392]}, \"(None, 'get_vpc', 222)\": {'mod': [265, 266, 267, 268, 269, 270, 271, 272, 273, 274]}, \"(None, 'get_commands_to_config_vpc', 278)\": {'mod': [288]}}}, {'path': 'test/units/modules/network/nxos/test_nxos_vpc.py', 'status': 'modified', 'Loc': {\"('TestNxosVpcModule', 'setUp', 31)\": {'add': [33]}, \"('TestNxosVpcModule', 'tearDown', 40)\": {'add': [41]}, \"('TestNxosVpcModule', 'load_fixtures', 45)\": {'add': [54], 'mod': [56]}, \"('TestNxosVpcModule', 'test_nxos_vpc_present', 58)\": {'add': [66]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_vpc.py"], "doc": [], "test": ["test/units/modules/network/nxos/test_nxos_vpc.py"], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "44b53141748d29220441e0799b54ea3130ac6753", "iss_html_url": "https://github.com/ansible/ansible/issues/78076", "iss_label": "support:core\nhas_pr\ndocs\naffects_2.12", "title": "Minor change to the getting started diagram", "body": "### Summary\n\nI was looking through the new Ansible getting started guide and noticed one of the nodes in the diagram has a duplicate label. 
s/node 2/node 3\n\n### Issue Type\n\nDocumentation Report\n\n### Component Name\n\nhttps://github.com/ansible/ansible/blob/devel/docs/docsite/rst/images/ansible_basic.svg\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.12.6]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/home/dnaro/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n ansible collection location = /home/dnaro/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.4 (main, Mar 25 2022, 00:00:00) [GCC 12.0.1 20220308 (Red Hat 12.0.1-0)]\r\n jinja version = 3.0.3\r\n libyaml = True\n```\n\n\n### Configuration\n\n```console\n$ ansible-config dump --only-changed -t all\r\n\r\nBECOME:\r\n======\r\n\r\nCACHE:\r\n=====\r\n\r\nCALLBACK:\r\n========\r\n\r\nCLICONF:\r\n=======\r\n\r\nCONNECTION:\r\n==========\r\n\r\nHTTPAPI:\r\n=======\r\n\r\nINVENTORY:\r\n=========\r\n\r\nLOOKUP:\r\n======\r\n\r\nNETCONF:\r\n=======\r\n\r\nSHELL:\r\n=====\r\n\r\nVARS:\r\n====\n```\n\n\n### OS / Environment\n\nFedora 36\n\n### Additional Information\n\nIt corrects something that is wrong.\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/78077", "commit_html_url": null, "file_loc": "{'base_commit': '44b53141748d29220441e0799b54ea3130ac6753', 'files': [{'path': 'docs/docsite/rst/images/ansible_basic.svg', 'status': 'modified', 'Loc': {'(None, None, 27)': {'mod': [27, 28, 29]}, '(None, None, 35)': {'mod': [35]}, '(None, None, 51)': {'mod': [51]}, '(None, None, 67)': {'mod': [67]}, '(None, None, 192)': {'mod': [192]}, '(None, None, 194)': {'mod': [194, 195, 196, 197, 198, 199, 200]}, '(None, None, 203)': {'mod': [203]}, '(None, None, 205)': {'mod': [205]}, '(None, None, 207)': {'mod': [207]}, '(None, None, 209)': {'mod': [209]}, '(None, None, 211)': {'mod': [211]}, '(None, None, 213)': {'mod': [213]}, '(None, None, 215)': {'mod': [215]}, '(None, None, 217)': {'mod': [217]}, '(None, None, 219)': {'mod': [219]}, '(None, None, 221)': {'mod': [221]}, '(None, None, 223)': {'mod': [223, 224, 225, 226]}, '(None, None, 230)': {'mod': [230]}, '(None, None, 233)': {'mod': [233]}, '(None, None, 236)': {'mod': [236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323]}, '(None, None, 326)': {'mod': [326, 327, 328, 329, 330, 331, 332]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/docsite/rst/images/ansible_basic.svg"], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "0335d05f437eb59bcb77a58ef7819562f298ba79", "iss_html_url": "https://github.com/ansible/ansible/issues/3730", "iss_label": "", "title": "ansible stacktrace", "body": "simple ansible facts now stack trace:\n\n```\nansible -m setup -c local -i ~/hosts 127.0.0.1\n```\n\n127.0.0.1 | FAILED => Traceback (most recent call last):\n File \"/home/bcoca/work/ansible/lib/ansible/runner/__init__.py\", line 367, in _executor\n exec_rc = self._executor_internal(host, new_stdin)\n File \"/home/bcoca/work/ansible/lib/ansible/runner/__init__.py\", line 389, in 
_executor_internal\n host_variables = self.inventory.get_variables(host)\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/__init__.py\", line 284, in get_variables\n self._vars_per_host[hostname] = self._get_variables(hostname)\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/__init__.py\", line 294, in _get_variables\n vars_results = [ plugin.run(host) for plugin in self._vars_plugins ]\n File \"/home/bcoca/work/ansible/lib/ansible/inventory/vars_plugins/group_vars.py\", line 43, in run\n self.pb_basedir = os.path.abspath(inventory.playbook_basedir())\n File \"/usr/lib/python2.7/posixpath.py\", line 343, in abspath\n if not isabs(path):\n File \"/usr/lib/python2.7/posixpath.py\", line 53, in isabs\n return s.startswith('/')\nAttributeError: 'NoneType' object has no attribute 'startswith'\n\nbisect showed 16efb45735899737aacc106f89014ee9551fd625 as culprit\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ansible/ansible/commit/0335d05f437eb59bcb77a58ef7819562f298ba79", "file_loc": "{'base_commit': '0335d05f437eb59bcb77a58ef7819562f298ba79', 'files': [{'path': 'lib/ansible/inventory/vars_plugins/group_vars.py', 'status': 'modified', 'Loc': {\"('VarsModule', 'run', 38)\": {'mod': [43]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/inventory/vars_plugins/group_vars.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "f841c2803a1e36bb6f392c466d36b669f9243464", "iss_html_url": "https://github.com/ansible/ansible/issues/77073", "iss_label": "module\nsupport:core\nfeature\nP3\naffects_2.13", "title": "Add support for deb822 apt sources with apt_repository", "body": "### Summary\n\nDebian has deprecated APT's original `sources.list` file format. As of Debian 11 (and Ubuntu 20.10), APT uses [the newer \"DEB822\" format](https://manpages.debian.org/unstable/apt/sources.list.5.en.html#DEB822-STYLE_FORMAT) by default. This format has been supported since APT 1.1, which goes back to Ubuntu 16.04 and Debian 9. 
\r\n\r\nAnsible should generate DEB822 `.sources` files instead of legacy `.list` files on supported systems.\n\n### Issue Type\n\nFeature Idea\n\n### Component Name\n\napt_repository\n\n### Additional Information\n\nHere's an example of the deb822 format:\r\n\r\n```\r\nTypes: deb\r\nURIs: http://deb.debian.org/debian\r\nSuites: bullseye\r\nComponents: main contrib non-free\r\n```\r\n\r\nThe `apt_repository` module can behave a lot more like the `yum_repository` one with this new format.\r\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/80018", "commit_html_url": null, "file_loc": "{'base_commit': 'f841c2803a1e36bb6f392c466d36b669f9243464', 'files': [{'path': 'test/integration/targets/setup_deb_repo/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, 61)': {'add': [61]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["test/integration/targets/setup_deb_repo/tasks/main.yml"], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b", "iss_html_url": "https://github.com/ansible/ansible/issues/58126", "iss_label": "networking\npython3\nmodule\nsupport:network\nbug\naffects_2.8\nios\ncisco", "title": "ios_facts module not enumerating ansible_net_model in Ansible 2.8", "body": "\r\n\r\n\r\n\r\n##### SUMMARY\r\n\r\nios_facts module not enumerating ansible_net_model in Ansible 2.8\r\n\r\n##### ISSUE TYPE\r\n- Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\nios_facts\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```paste below\r\nansible 2.8.1\r\n config file = /home/ryan/test/ansible.cfg\r\n configured module search path = ['/home/ryan/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.6/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.6.8 (default, Apr 25 2019, 21:02:35) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n $ ansible-config dump --only-changed\r\nDEFAULT_GATHERING(/home/ryan/test/ansible.cfg) = explicit\r\nDEFAULT_HOST_LIST(/home/ryan/test/ansible.cfg) = ['/home/ryan/test/inventory']\r\nDEPRECATION_WARNINGS(/home/ryan/test/ansible.cfg) = False\r\nHOST_KEY_CHECKING(/home/ryan/test/ansible.cfg) = False\r\nPERSISTENT_COMMAND_TIMEOUT(/home/ryan/test/ansible.cfg) = 30\r\nPERSISTENT_CONNECT_TIMEOUT(/home/ryan/test/ansible.cfg) = 30\r\nRETRY_FILES_ENABLED(/home/ryan/test/ansible.cfg) = False\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\n\r\nHost OS: CentOS 7 virtual machine (VMware player)\r\nPython versions: Reproducible on 2.7.5 and 3.6\r\n\r\nTested on:\r\nCSR1000v running IOS-XE 16.09.03\r\nISR4331 running IOS-XE 16.06.03\r\nCatalyst 3850 running IOS-XE 03.06.03E\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nRun playbook to gather ios_facts, ansible_net_model is not included in any subset. 
Should always be included, per: https://docs.ansible.com/ansible/latest/modules/ios_facts_module.html\r\n\r\n\r\n```yaml\r\n name: IOS Facts gathering\r\n hosts: CSRTEST\r\n connection: network_cli\r\n gather_facts: yes\r\n tasks:\r\n - name: Gather facts from device\r\n ios_facts:\r\n gather_subset: all\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\nExpecting ansible_net_model back as one of the facts gathered.\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\n```paste below\r\nTASK [Gather facts from device] ****************************************************************************************************************************************************************************************************************************************\r\ntask path: /home/ryan/test/test_facts.yml:6\r\n attempting to start connection\r\n using connection plugin network_cli\r\n found existing local domain socket, using it!\r\n updating play_context for connection\r\n\r\n local domain socket path is /home/ryan/.ansible/pc/5485150d9c\r\n ESTABLISH LOCAL CONNECTION FOR USER: ryan\r\n EXEC /bin/sh -c '( umask 77 && mkdir -p \"` echo /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379 `\" && echo ansible-tmp-1561047456.179465-161563255687379=\"` echo /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379 `\" ) && sleep 0'\r\nUsing module file /usr/local/lib/python3.6/site-packages/ansible/modules/network/ios/ios_facts.py\r\n PUT /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/tmp6gh5jigs TO /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py\r\n EXEC /bin/sh -c 'chmod u+x /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/ /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py && sleep 0'\r\n EXEC /bin/sh -c '/usr/bin/python /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/AnsiballZ_ios_facts.py && sleep 0'\r\n EXEC /bin/sh -c 'rm -f -r /home/ryan/.ansible/tmp/ansible-local-228682ct6pdp_/ansible-tmp-1561047456.179465-161563255687379/ > /dev/null 2>&1 && sleep 0'\r\nok: [CSRTEST] => {\r\n \"ansible_facts\": {\r\n \"ansible_net_all_ipv4_addresses\": [\r\n \"192.168.102.133\"\r\n ],\r\n \"ansible_net_all_ipv6_addresses\": [],\r\n \"ansible_net_api\": \"cliconf\",\r\n \"ansible_net_config\": \"!\\n! Last configuration change at 16:13:51 UTC Thu Jun 20 2019\\n!\\nversion 16.9\\nservice timestamps debug datetime msec\\nservice timestamps log datetime msec\\nplatform qfp utilization monitor load 80\\nno platform punt-keepalive disable-kernel-core\\nplatform console virtual\\n!\\nhostname CSRTEST\\n!\\nboot-start-marker\\nboot-end-marker\\n!\\n!\\n!\\nno aaa new-model\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nlogin on-success log\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nsubscriber templating\\n! \\n! \\n! \\n! 
\\n!\\nmultilink bundle-name authenticated\\n!\\n!\\n!\\n!\\n!\\ncrypto pki trustpoint TP-self-signed-3768273344\\n enrollment selfsigned\\n subject-name cn=IOS-Self-Signed-Certificate-3768273344\\n revocation-check none\\n rsakeypair TP-self-signed-3768273344\\n!\\n!\\ncrypto pki certificate chain TP-self-signed-3768273344\\n certificate self-signed 01\\n 30820330 30820218 A0030201 02020101 300D0609 2A864886 F70D0101 05050030 \\n 31312F30 2D060355 04031326 494F532D 53656C66 2D536967 6E65642D 43657274 \\n 69666963 6174652D 33373638 32373333 3434301E 170D3139 30363230 31363134 \\n 30395A17 0D333030 31303130 30303030 305A3031 312F302D 06035504 03132649 \\n 4F532D53 656C662D 5369676E 65642D43 65727469 66696361 74652D33 37363832 \\n 37333334 34308201 22300D06 092A8648 86F70D01 01010500 0382010F 00308201 \\n 0A028201 0100891F 68316AAF AF54176F 7D9C39F5 E34FB187 F4D88C88 8265FDE9 \\n B3A338A1 FADD5622 1A2887D2 1E655477 9EDEA72C 94EAB9C4 744C428C 83BC30A1 \\n E18B6EBC 69856EC8 4F5E8649 9D442076 3544F7D1 01AC0B0B 76E9CBE1 AEFA2C4A \\n 4EB0EE8B 29895287 97A9C7CC 586A0241 19DC79E9 35A415A5 7D976DAB 7E072350 \\n C2617E80 F8DB84D1 CFC0EBE5 3194A8C4 2E7AAC3C 7F97D423 2B016D97 C12164A6 \\n D75B73E8 A9EA96ED 079CAB76 2B8DEA2E BBB61836 C913E020 B0F7659D DA4CF838 \\n 7FCC72B5 522932D6 37196DD2 2897D197 BD6FD0C0 576CED54 85A7C94B 029BC4A3 \\n F0D7F7CC 4AAFC50A 297B6E6E ECF97699 2062D939 38DD585D E78A2794 40381513 \\n 75AEAA98 F8550203 010001A3 53305130 0F060355 1D130101 FF040530 030101FF \\n 301F0603 551D2304 18301680 147DF3A5 74A80322 7F0D4A33 C839CE1E 479BCFD0 \\n 8C301D06 03551D0E 04160414 7DF3A574 A803227F 0D4A33C8 39CE1E47 9BCFD08C \\n 300D0609 2A864886 F70D0101 05050003 82010100 87C47448 FAE908F7 47B564D7 \\n 992A8E16 24966357 D0B864AB B32BB538 6A5371F3 0BF093E8 D0E461AC 2ED99B84 \\n 768E700C A88464AA B8E0B774 2308D4A2 881495B7 AFE1F6D7 3D25AFEE 2A7D6653 \\n 6814B4AC E4189640 15C0003E 1E1EE9B1 6E3FF371 448CA017 DA622BCD 49EF07C5 \\n FB4D6859 208FF4FE 29AEB2F3 BB9BA26E 1D140B6A B2C4DADA 913D4846 84370AF0 \\n A67E3D78 F0E9CE1E 9D344542 2732C2A7 70A50162 B32BBE36 BF3382AD 641DB7A6 \\n 1AE1FD10 2CFEC3A6 1ACCD4FD 58E48276 9F2417F4 1871A9F7 11C61604 09E4BBEB \\n 2D821D14 815A48FC 7B14A7C2 8766F1B1 7C04112A 139DB760 EFF339D0 1BA82B52 \\n 5E85BBA9 3FC49134 4FEDD944 BA27F4A4 1317652C\\n \\tquit\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\nlicense udi pid CSR1000V sn 9U4DE1R3P2Y\\nlicense boot level ax\\nno license smart enable\\ndiagnostic bootup level minimal\\n!\\nspanning-tree extend system-id\\n!\\n!\\n!\\nusername ansible privilege 15 secret 5 $1$Ax9o$F2JTz/1dXjNSB21muGqxU1\\n!\\nredundancy\\n!\\n!\\n!\\n!\\n!\\n!\\n! \\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n!\\n! \\n! 
\\n!\\n!\\ninterface GigabitEthernet1\\n ip address dhcp\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\ninterface GigabitEthernet2\\n no ip address\\n shutdown\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\ninterface GigabitEthernet3\\n no ip address\\n shutdown\\n negotiation auto\\n no mop enabled\\n no mop sysid\\n!\\nip forward-protocol nd\\nip http server\\nip http authentication local\\nip http secure-server\\nip route 0.0.0.0 0.0.0.0 GigabitEthernet1 dhcp\\n!\\nip ssh version 2\\n!\\n!\\n!\\n!\\n!\\ncontrol-plane\\n!\\n!\\n!\\n!\\n!\\n!\\nline con 0\\n stopbits 1\\nline vty 0 4\\n login local\\nline vty 5 15\\n login local\\n!\\n!\\n!\\n!\\n!\\n!\\nend\",\r\n \"ansible_net_filesystems\": [\r\n \"bootflash:\"\r\n ],\r\n \"ansible_net_filesystems_info\": {\r\n \"bootflash:\": {\r\n \"spacefree_kb\": 6801160,\r\n \"spacetotal_kb\": 7712692\r\n }\r\n },\r\n \"ansible_net_gather_subset\": [\r\n \"hardware\",\r\n \"default\",\r\n \"interfaces\",\r\n \"config\"\r\n ],\r\n \"ansible_net_hostname\": \"CSRTEST\",\r\n \"ansible_net_image\": \"bootflash:packages.conf\",\r\n \"ansible_net_interfaces\": {\r\n \"GigabitEthernet1\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [\r\n {\r\n \"address\": \"192.168.102.133\",\r\n \"subnet\": \"24\"\r\n }\r\n ],\r\n \"lineprotocol\": \"up\",\r\n \"macaddress\": \"000c.29a5.1122\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"up\",\r\n \"type\": \"CSR vNIC\"\r\n },\r\n \"GigabitEthernet2\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [],\r\n \"lineprotocol\": \"down\",\r\n \"macaddress\": \"000c.29a5.112c\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"administratively down\",\r\n \"type\": \"CSR vNIC\"\r\n },\r\n \"GigabitEthernet3\": {\r\n \"bandwidth\": 1000000,\r\n \"description\": null,\r\n \"duplex\": \"Full\",\r\n \"ipv4\": [],\r\n \"lineprotocol\": \"down\",\r\n \"macaddress\": \"000c.29a5.1136\",\r\n \"mediatype\": \"Virtual\",\r\n \"mtu\": 1500,\r\n \"operstatus\": \"administratively down\",\r\n \"type\": \"CSR vNIC\"\r\n }\r\n },\r\n \"ansible_net_iostype\": \"IOS-XE\",\r\n \"ansible_net_memfree_mb\": 1863849,\r\n \"ansible_net_memtotal_mb\": 2182523,\r\n \"ansible_net_neighbors\": {},\r\n \"ansible_net_python_version\": \"2.7.5\",\r\n \"ansible_net_serialnum\": \"9U4DE1R3P2Y\",\r\n \"ansible_net_system\": \"ios\",\r\n \"ansible_net_version\": \"16.09.03\"\r\n },\r\n \"changed\": false,\r\n \"invocation\": {\r\n \"module_args\": {\r\n \"auth_pass\": null,\r\n \"authorize\": null,\r\n \"gather_subset\": [\r\n \"all\"\r\n ],\r\n \"host\": null,\r\n \"password\": null,\r\n \"port\": null,\r\n \"provider\": null,\r\n \"ssh_keyfile\": null,\r\n \"timeout\": null,\r\n \"username\": null\r\n }\r\n }\r\n}\r\nMETA: ran handlers\r\nMETA: ran handlers\r\n\r\nPLAY RECAP *************************************************************************************************************************************************************************************************************************************************************\r\nCSRTEST : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0\r\n\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/58174", "commit_html_url": null, "file_loc": "{'base_commit': '6a71aef6c5b2f4c26d5f6522cd5b1a85cd78ee6b', 'files': [{'path': 'lib/ansible/plugins/cliconf/ios.py', 'status': 'modified', 'Loc': 
{\"('Cliconf', 'get_device_info', 199)\": {'mod': [210, 211, 212]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/plugins/cliconf/ios.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "d1cd6ee56d492deef40f6f2f178832a1815730a5", "iss_html_url": "https://github.com/ansible/ansible/issues/37734", "iss_label": "cloud\nazure\nmodule\naffects_2.4\nsupport:certified\nfeature", "title": "Add network interface to Load Balancer Backend pool in azure_rm_networkinterface", "body": "##### ISSUE TYPE\r\n - Feature Idea\r\n\r\n##### COMPONENT NAME\r\nazure_rm_networkinterface\r\n\r\n##### ANSIBLE VERSION\r\n```\r\nansible --version\r\nansible 2.4.3.0\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = [u'/home/dgermain/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]\r\n```\r\n\r\n##### CONFIGURATION\r\n```\r\nansible-config dump --only-changed\r\n#empty return\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### SUMMARY\r\nIn current azure loadbalancer module, you can create Backend pools, but you don't have the possibility to add network interfaces in this Backend pool, neither in *azure_rm_networkinterface* nor in *azure_rm_loadbalancer*.\r\nAs an example, this feature is present in Powershell azure CLI, when handling network interfaces :\r\n```\r\n $nic = Get-AzurermNetworkInterface -name $virtualnetworkcardname\" -resourcegroupname $resourceGroup\r\n $nic.IpConfigurations[0].LoadBalancerBackendAddressPools=$backend\r\n Set-AzureRmNetworkInterface -NetworkInterface $nic\r\n```\r\n\r\n##### STEPS TO REPRODUCE\r\nAs far as I can tell, you don't have this option in the ansible module\r\n\r\n##### EXPECTED RESULTS\r\nHave an option to allow this\r\n\r\n##### ACTUAL RESULTS\r\nNo option to do so", "code": null, "pr_html_url": "github.com/ansible/ansible/pull/38643", "commit_html_url": null, "file_loc": "{'base_commit': 'd1cd6ee56d492deef40f6f2f178832a1815730a5', 'files': [{'path': 'lib/ansible/module_utils/azure_rm_common.py', 'status': 'modified', 'Loc': {\"('AzureRMModuleBase', None, 216)\": {'add': [605]}, '(None, None, None)': {'mod': [131]}}}, {'path': 'lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [66, 73, 153, 198, 210, 239, 351], 'mod': [55, 57, 58, 59, 60, 61, 62, 63, 64, 65, 68, 127, 128, 160, 162, 163, 165, 186, 197, 207, 220, 222, 233, 234, 277, 286]}, \"(None, 'nic_to_dict', 306)\": {'add': [313]}, \"('AzureRMNetworkInterface', 'exec_module', 411)\": {'add': [525], 'mod': [427, 428, 429, 431, 432, 435, 468, 469, 470, 473, 477, 514, 515, 516, 530, 532, 534]}, \"('AzureRMNetworkInterface', None, 356)\": {'add': [600], 'mod': [594]}, \"('AzureRMNetworkInterface', 'construct_ip_configuration_set', 601)\": {'add': [606]}, \"('AzureRMNetworkInterface', '__init__', 358)\": {'mod': [364, 371, 372, 380, 386, 392, 393, 397]}, \"('AzureRMNetworkInterface', 'get_security_group', 594)\": {'mod': [597]}}}, {'path': 'test/integration/targets/azure_rm_networkinterface/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, 19)': {'add': [19]}, '(None, 
None, 124)': {'add': [124]}, '(None, None, 131)': {'add': [131]}, '(None, None, 148)': {'add': [148]}, '(None, None, 164)': {'add': [164]}, '(None, None, 179)': {'add': [179]}, '(None, None, 189)': {'add': [189]}, '(None, None, 36)': {'mod': [36]}, '(None, None, 40)': {'mod': [40, 41]}, '(None, None, 43)': {'mod': [43]}, '(None, None, 48)': {'mod': [48]}, '(None, None, 52)': {'mod': [52, 53]}, '(None, None, 55)': {'mod': [55]}, '(None, None, 78)': {'mod': [78]}, '(None, None, 90)': {'mod': [90]}, '(None, None, 113)': {'mod': [113]}, '(None, None, 137)': {'mod': [137]}, '(None, None, 159)': {'mod': [159]}, '(None, None, 176)': {'mod': [176]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\nConfig\nTest"}, "loctype": {"code": ["lib/ansible/modules/cloud/azure/azure_rm_networkinterface.py", "lib/ansible/module_utils/azure_rm_common.py"], "doc": [], "test": [], "config": ["test/integration/targets/azure_rm_networkinterface/tasks/main.yml"], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "6f8c1da0c805f334b8598fd2556f7ed92dc9348e", "iss_html_url": "https://github.com/ansible/ansible/issues/79277", "iss_label": "bug\ntraceback\naffects_2.13", "title": "ansible-test fails to report the proper error when validating ansible-doc", "body": "### Summary\n\nThe utility ansible-test sanity is fantastic and does its job. Unfortunately, when validating the ansible-doc, if the YAML is malformed, you'll get a parsing error instead of the actual YAML error.\n\n### Issue Type\n\nBug Report\n\n### Component Name\n\nansible-test\n\n### Ansible Version\n\n```console\n$ ansible --version\r\nansible [core 2.13.6rc1.post0] (stable-2.13 33852737fd) last updated 2022/10/31 21:51:24 (GMT +200)\r\n config file = None\r\n configured module search path = ['/home/warkdev/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /home/warkdev/ansible/lib/ansible\r\n ansible collection location = /home/warkdev/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /home/warkdev/ansible/bin/ansible\r\n python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]\r\n jinja version = 3.1.2\r\n libyaml = False\n```\n\n\n### Configuration\n\n```console\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\n```\n\n\n### OS / Environment\n\nDebian 12\n\n### Steps to Reproduce\n\n* Generate an ansible module that you want to validate and introduce invalid YAML syntax in the ansible-doc\r\n* Run ansible-test sanity against that module\r\n* Verify that the error is happening\r\n\r\nI've tracked down the issue till this code: https://github.com/ansible/ansible/blob/stable-2.13/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py#L157\n\n### Expected Results\n\nERROR: Found 2 yamllint issue(s) which need to be resolved:\r\nERROR: plugins/modules/axway_cft_about_info.py:36:15: error: RETURN: syntax error: mapping values are not allowed here (syntax)\r\nERROR: plugins/modules/axway_cft_about_info.py:36:15: unparsable-with-libyaml: None - mapping values are not allowed in this context\n\n### Actual Results\n\n```console\nTraceback (most recent call last):\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py\", line 153, in 
parse_yaml\r\n data = yaml_load(value, Loader=loader)\r\n File \"/root/.ansible/test/venv/sanity.validate-modules/3.10/487215fd/lib/python3.10/site-packages/yaml/__init__.py\", line 81, in load\r\n return loader.get_single_data()\r\n File \"/root/.ansible/test/venv/sanity.validate-modules/3.10/487215fd/lib/python3.10/site-packages/yaml/constructor.py\", line 49, in get_single_data\r\n node = self.get_single_node()\r\n File \"yaml/_yaml.pyx\", line 673, in yaml._yaml.CParser.get_single_node\r\n File \"yaml/_yaml.pyx\", line 687, in yaml._yaml.CParser._compose_document\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 845, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 845, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 731, in yaml._yaml.CParser._compose_node\r\n File \"yaml/_yaml.pyx\", line 847, in yaml._yaml.CParser._compose_mapping_node\r\n File \"yaml/_yaml.pyx\", line 860, in yaml._yaml.CParser._parse_next_event\r\nyaml.scanner.ScannerError: mapping values are not allowed in this context\r\n in \"\", line 9, column 15\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate.py\", line 6, in \r\n main()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2475, in main\r\n run()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2363, in run\r\n mv1.validate()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 2156, in validate\r\n doc_info, docs = self._validate_docs()\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/main.py\", line 1080, in _validate_docs\r\n data, errors, traces = parse_yaml(doc_info['RETURN']['value'],\r\n File \"/root/ansible/test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py\", line 157, in parse_yaml\r\n e.problem_mark.line += lineno - 1\r\nAttributeError: attribute 'line' of 'yaml._yaml.Mark' objects is not writable\n```\n\n\n### Code of Conduct\n\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/79682", "commit_html_url": null, "file_loc": "{'base_commit': '6f8c1da0c805f334b8598fd2556f7ed92dc9348e', 'files': [{'path': 'test/integration/targets/ansible-test-sanity-validate-modules/runme.sh', 'status': 'modified', 'Loc': {'(None, None, 7)': {'mod': [7]}}}, {'path': 'test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py', 'status': 'modified', 'Loc': {\"(None, 'parse_yaml', 137)\": {'mod': [157, 158, 161]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["test/lib/ansible_test/_util/controller/sanity/validate-modules/validate_modules/utils.py"], "doc": [], "test": [], "config": [], "asset": ["test/integration/targets/ansible-test-sanity-validate-modules/runme.sh"]}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": 
"d97080174e9bbebd27a967368934ef91d1f28f64", "iss_html_url": "https://github.com/ansible/ansible/issues/32070", "iss_label": "networking\naffects_2.4\nsupport:core\nnxos\nbug\ncisco", "title": "Occasional failures with NXOS modules", "body": "##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\nnxos modules\r\n\r\n##### ANSIBLE VERSION\r\nansible 2.4.0.0\r\n config file = /project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg\r\n configured module search path = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']\r\n ansible python module location = /usr/lib/python2.7/dist-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609]\r\n\r\n\r\n##### CONFIGURATION\r\nDEFAULT_ACTION_PLUGIN_PATH(env: ANSIBLE_ACTION_PLUGINS) = [u'/etc/ansible/roles/plugins/action', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/actions']\r\nDEFAULT_CALLBACK_PLUGIN_PATH(env: ANSIBLE_CALLBACK_PLUGINS) = [u'/etc/ansible/roles/plugins/callback', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/callbacks']\r\nDEFAULT_CALLBACK_WHITELIST(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = ['profile_tasks']\r\nDEFAULT_FILTER_PLUGIN_PATH(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/plugins/filter']\r\nDEFAULT_FORCE_HANDLERS(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = True\r\nDEFAULT_HOST_LIST(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/inventory']\r\nDEFAULT_MODULE_PATH(env: ANSIBLE_LIBRARY) = [u'/etc/ansible/roles/plugins/library', u'/usr/local/lib/python2.7/dist-packages/ara/plugins/modules']\r\nDEFAULT_ROLES_PATH(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = [u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles-dev', u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles', u'/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/roles-base-shell']\r\nHOST_KEY_CHECKING(/project-mcuk/ap/ipp/jrosser/cloud/ansible-mist/ansible.cfg) = False\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 16.04\r\nNXOS: version 7.0(3)I7(1)\r\n\r\n##### SUMMARY\r\nI observe non deterministic failures with the nxos modules when configuring 9200 series switches, in this specific case a 92160.\r\n\r\n##### STEPS TO REPRODUCE\r\nSadly this is difficult to reproduce. I have a playbook which configures a couple of dozen ports on several switches, each taking a dozen or more tasks. This is a sufficient number of tasks to occasionally trigger a failure of a task. Running the playbook again most likely will result in no errors.\r\n\r\nPlaybook https://gist.github.com/jrosser/b4d88748f5b1323828a8f2f266596ead\r\n\r\n##### EXPECTED RESULTS\r\nAll tasks to run without error. 
Running with -vvvv gives no insight into the communication with the switch so doesn't provide any useful debug.\r\n\r\n##### ACTUAL RESULTS\r\nVery occasionally one or more tasks will fail.\r\n```\r\nTASK [Ensure all layer 2 interfaces are up] ***********************************************************************************************************\r\nTuesday 24 October 2017 10:54:15 +0000 (0:00:21.378) 0:01:00.450 ******* \r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/1', u'description': u'to infra0-1-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/1', u'description': u'to infra0-1-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/2', u'description': u'to infra0-2-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/2', u'description': u'to infra0-2-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/3', u'description': u'to infra0-3-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/3', u'description': u'to infra0-3-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/4', u'description': u'to infra0-4-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/4', u'description': u'to infra0-4-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/5', u'description': u'to infra0-5-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/5', u'description': u'to infra0-5-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/6', u'description': u'to infra0-6-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/6', u'description': u'to infra0-6-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/7', u'description': u'to infra0-7-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/7', u'description': u'to infra0-7-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/8', u'description': u'to infra0-8-b505-10'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/8', u'description': u'to infra0-8-b505-10'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/9', u'description': u'to infra0-1-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/9', u'description': u'to infra0-1-b505-9'})\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. 
The error was: TypeError: string indices must be integers, not str\r\nfailed: [fbs0-b505-10] (item={u'interface': u'Ethernet1/10', u'description': u'to infra0-2-b505-9'}) => {\"changed\": false, \"failed\": true, \"item\": {\"description\": \"to infra0-2-b505-9\", \"interface\": \"Ethernet1/10\"}, \"module_stderr\": \"Traceback (most recent call last):\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 710, in \\n main()\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 701, in main\\n normalized_interface)\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 534, in smart_existing\\n existing = get_interface(normalized_interface, module)\\n File \\\"/tmp/ansible_SPjZ0l/ansible_module_nxos_interface.py\\\", line 281, in get_interface\\n interface_table = body['TABLE_interface']['ROW_interface']\\nTypeError: string indices must be integers, not str\\n\", \"module_stdout\": \"\", \"msg\": \"MODULE FAILURE\", \"rc\": 0}\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/10', u'description': u'to infra0-2-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/11', u'description': u'to infra0-3-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/11', u'description': u'to infra0-3-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/12', u'description': u'to infra0-4-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/12', u'description': u'to infra0-4-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/13', u'description': u'to infra0-5-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/13', u'description': u'to infra0-5-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/14', u'description': u'to infra0-6-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/14', u'description': u'to infra0-6-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/15', u'description': u'to infra0-7-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/15', u'description': u'to infra0-7-b505-9'})\r\nchanged: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/16', u'description': u'to infra0-8-b505-9'})\r\nchanged: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/16', u'description': u'to infra0-8-b505-9'})\r\nok: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/47', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/47', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-10] => (item={u'interface': u'Ethernet1/48', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Ethernet1/48', u'stp_port_type': u'network', u'description': u'vpc peer link'})\r\n\r\n\r\nTASK [Ensure vrrpv3 is applied for vlans that need it] ************************************************************************************************\r\nTuesday 24 October 2017 11:01:48 +0000 (0:00:11.191) 0:08:33.606 ******* \r\nskipping: [fbs0-b505-9] => (item={u'vrf': u'default', u'vlan_id': 999}) \r\nok: [fbs0-b505-9] => (item={u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 23, u'description': u'storage-clients', u'address': u'10.23.128.5'}, u'vrf': u'STORAGE', u'address': u'10.23.128.1/24', u'interface': u'Vlan1923', u'extra_lines': [u'mtu 9216'], u'vlan_id': 1923})\r\nok: [fbs0-b505-9] => 
(item={u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 21, u'description': u'storage-services', u'address': u'10.21.128.5'}, u'vrf': u'STORAGE', u'address': u'10.21.128.1/24', u'interface': u'Vlan1921', u'extra_lines': [u'mtu 9216'], u'vlan_id': 1921})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1911', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 11, u'description': u'osmgmt', u'address': u'10.11.128.5'}, u'vrf': u'OSMGMT', u'vlan_id': 1911, u'address': u'10.11.128.1/24'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1931', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 31, u'description': u'metal', u'address': u'10.31.128.5'}, u'vrf': u'METAL', u'vlan_id': 1931, u'address': u'10.31.128.1/24'})\r\nok: [fbs0-b505-9] => (item={u'interface': u'Vlan1932', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 32, u'description': u'metal', u'address': u'10.32.128.5'}, u'vrf': u'METAL', u'vlan_id': 1932, u'address': u'10.32.128.1/24'})\r\nfailed: [fbs0-b505-9] (item={u'interface': u'Vlan1941', u'vrrpv3': {u'priority': u'102', u'address_family': u'ipv4', u'group_id': 41, u'description': u'tunnels', u'address': u'10.41.128.5'}, u'vrf': u'TUNNEL', u'vlan_id': 1941, u'address': u'10.41.128.1/24'}) => {\"changed\": false, \"failed\": true, \"item\": {\"address\": \"10.41.128.1/24\", \"interface\": \"Vlan1941\", \"vlan_id\": 1941, \"vrf\": \"TUNNEL\", \"vrrpv3\": {\"address\": \"10.41.128.5\", \"address_family\": \"ipv4\", \"description\": \"tunnels\", \"group_id\": 41, \"priority\": \"102\"}}, \"msg\": \"interface Vlan1941\\r\\r\\n ^\\r\\n% Invalid command at '^' marker.\\r\\n\\rfbs0-b505-9# \"}\r\n```", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/32114", "commit_html_url": null, "file_loc": "{'base_commit': 'd97080174e9bbebd27a967368934ef91d1f28f64', 'files': [{'path': 'lib/ansible/module_utils/nxos.py', 'status': 'modified', 'Loc': {\"('Cli', 'run_commands', 139)\": {'add': [171]}, '(None, None, None)': {'mod': [37]}}}, {'path': 'lib/ansible/modules/network/nxos/nxos_interface.py', 'status': 'modified', 'Loc': {\"(None, 'get_interface', 238)\": {'mod': [278, 280, 281, 282, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 312, 313, 315, 316, 317, 318, 319, 320, 321, 322, 324, 325, 326, 327, 329, 330, 331, 332, 333, 334, 335, 336, 338, 339, 340, 341, 342, 343]}, \"(None, 'get_interfaces_dict', 361)\": {'mod': [372]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/network/nxos/nxos_interface.py", "lib/ansible/module_utils/nxos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "cc7a5228b02344658dac69c38ccb7d6580d2b4c6", "iss_html_url": "https://github.com/ansible/ansible/issues/34012", "iss_label": "module\naffects_2.4\nnet_tools\nsupport:community\nbug", "title": "nmcli module fails with self.dns4=' '.join(module.params['dns4']) TypeError", "body": "\r\n##### ISSUE TYPE\r\n\r\n - Bug Report\r\n\r\n\r\n##### COMPONENT NAME\r\n\r\n`nmcli`\r\n\r\n##### ANSIBLE VERSION\r\n\r\n```\r\nansible 2.4.1.0\r\n config file = /Users/dlbewley/src/ansible/playbook-openshift/ansible.cfg\r\n configured module search path = 
[u'/Users/dlbewley/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/Cellar/ansible/2.4.1.0/libexec/lib/python2.7/site-packages/ansible\r\n executable location = /usr/local/bin/ansible\r\n python version = 2.7.14 (default, Sep 25 2017, 09:53:22) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.37)]\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n##### OS / ENVIRONMENT\r\n\r\n- Manager: OS X\r\n- Managed: Red Hat Enterprise Linux Server release 7.4 (Maipo)\r\n\r\n##### SUMMARY\r\n\r\nPlaybook fails when trying to join `None` value for `dns4` param [here](https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/net_tools/nmcli.py#L559)\r\n\r\nI do not see a requirement to include dns servers, and expect to use DHCP.\r\n\r\n##### STEPS TO REPRODUCE\r\n\r\nHost with links on eno1, eno2. Int eno1 is def gw\r\n\r\n\r\n```yaml\r\n---\r\n- hosts: bonded\r\n\r\n# Dec 18 18:08:43 ose-prod-node-07 ansible-nmcli[31031]: Invoked with conn_name=cluster ingress=None slavepriority=32 vlandev=None forwarddelay=15 egress=None ageingtime=300 mtu=None hellotime=2 maxage=20 vlanid=None priority=128 gw4=None state=present gw6=None master=None stp=True ifname=None type=bond miimon=None arp_ip_target=None downdelay=None mac=None ip6=None ip4=None autoconnect=None dns6=None dns4=None arp_interval=None flags=None mode=802.3ad updelay=None\r\n\r\n vars:\r\n nmcli_bond:\r\n - conn_name: cluster\r\n mode: 802.3ad\r\n mtu: 9000\r\n\r\n nmcli_bond_slave:\r\n - conn_name: eno1\r\n master: cluster\r\n - conn_name: eno2\r\n master: cluster\r\n\r\n tasks:\r\n - name: create bond\r\n nmcli:\r\n type: bond\r\n conn_name: '{{ item.conn_name }}'\r\n mode: '{{ item.mode }}'\r\n state: present\r\n with_items:\r\n - '{{ nmcli_bond }}'\r\n\r\n - name: add interfaces to bond\r\n nmcli:\r\n type: bond-slave\r\n conn_name: '{{ item.conn_name }}'\r\n ifname: '{{ item.ifname }}'\r\n master: '{{ item.master }}'\r\n state: present\r\n with_items:\r\n - '{{ nmcli_bond_slave }}'\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\n\r\n\r\nFirst test, but expect playbook to run without error.\r\n\r\n##### ACTUAL RESULTS\r\n\r\n\r\n\r\n```\r\nfailed: [ose-prod-node-07.example.com] (item={u'conn_name': u'cluster', u'mode': u'802.3ad', u'mtu': 9000}) => {\r\n \"changed\": false,\r\n \"failed\": true,\r\n \"item\": {\r\n \"conn_name\": \"cluster\",\r\n \"mode\": \"802.3ad\",\r\n \"mtu\": 9000\r\n },\r\n \"module_stderr\": \"OpenSSH_7.4p1, LibreSSL 2.5.0\\r\\ndebug1: Reading configuration data /Users/dlbewley/.ssh/config\\r\\ndebug1: /Users/dlbewley/.ssh/config line 3: Applying options for *\\r\\ndebug1: Reading configuration data /etc/ssh/ssh_config\\r\\ndebug1: /etc/ssh/ssh_config line 51: Applying options for *\\r\\ndebug1: /etc/ssh/ssh_config line 56: Applying options for *\\r\\ndebug1: auto-mux: Trying existing master\\r\\ndebug2: fd 3 setting O_NONBLOCK\\r\\ndebug2: mux_client_hello_exchange: master version 4\\r\\ndebug3: mux_client_forwards: request forwardings: 0 local, 0 remote\\r\\ndebug3: mux_client_request_session: entering\\r\\ndebug3: mux_client_request_alive: entering\\r\\ndebug3: mux_client_request_alive: done pid = 10219\\r\\ndebug3: mux_client_request_session: session request sent\\r\\ndebug1: mux_client_request_session: master session id: 2\\r\\ndebug3: mux_client_read_packet: read header failed: Broken pipe\\r\\ndebug2: Received exit status from master 1\\r\\nShared connection to ose-prod-node-07.example.com closed.\\r\\n\",\r\n \"module_stdout\": 
\"/tmp/ansible_gKn2an/ansible_module_nmcli.py:493: PyGIWarning: NetworkManager was imported without specifying a version first. Use gi.require_version('NetworkManager', '1.0') before import to ensure that the right version gets loaded.\\r\\n from gi.repository import NetworkManager, NMClient\\r\\n/tmp/ansible_gKn2an/ansible_module_nmcli.py:493: PyGIWarning: NMClient was imported without specifying a version first. Use gi.require_version('NMClient', '1.0') before import to ensure that the right version gets loaded.\\r\\n from gi.repository import NetworkManager, NMClient\\r\\nTraceback (most recent call last):\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 1190, in \\r\\n main()\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 1134, in main\\r\\n nmcli=Nmcli(module)\\r\\n File \\\"/tmp/ansible_gKn2an/ansible_module_nmcli.py\\\", line 559, in __init__\\r\\n self.dns4=' '.join(module.params['dns4'])\\r\\nTypeError\\r\\n\",\r\n \"msg\": \"MODULE FAILURE\",\r\n \"rc\": 1\r\n}\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/ansible/ansible/pull/30757", "commit_html_url": null, "file_loc": "{'base_commit': 'cc7a5228b02344658dac69c38ccb7d6580d2b4c6', 'files': [{'path': 'lib/ansible/modules/net_tools/nmcli.py', 'status': 'modified', 'Loc': {\"('Nmcli', '__init__', 549)\": {'mod': [559]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/modules/net_tools/nmcli.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "5f7d39fede4de8af98472bd009c63c3a86568e2d", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2840", "iss_label": "bug", "title": "wandb: Network error (ReadTimeout), entering retry loop. See wandb\\debug-internal.log for full traceback.", "body": "\r\n - **Current repo**: yolov5-5.0 release version\r\n - **Common dataset**: VisDrone.yaml\r\n - **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments\r\n\r\n\r\n## \ud83d\udc1b Bug\r\nI try to use your rep to train yolov4's NET because yolov4(https://github.com/WongKinYiu/PyTorch_YOLOv4)'s code is outdate and do not maintain, it has many bugs.\r\n when I train my own yolov4-tiny.yaml, it comes this bug, I think this bug is because my network can not connect to wandb's server? 
before today, I can train normally, and a few minute ago, I try many times to `python train.py `,but I still can not begin my train code.\r\n\r\n## To Reproduce (REQUIRED)\r\n \r\n`python train.py `\r\n\r\nOutput:\r\n```\r\nYOLOv5 2021-4-15 torch 1.7.1 CUDA:0 (GRID V100D-32Q, 32638.0MB)\r\n\r\nNamespace(adam=False, artifact_alias='latest', batch_size=64, bbox_interval=-1, bucket='', cache_images=False, cfg='models/yolov4-tiny.yaml', data='datai/Visdrone.yaml', device='', entity=None, epochs=300, evolve=False, exist_ok=False, global_rank=-1, hyp='data/hyp.scratch.yaml', image_weights=False, img_size=[640, 640], label_smoothing=0.0, linear_lr=False, local_rank=-1, multi_scale=False, name='exp', noautoanchor=False, nosave=False, notest=False, project='runs/train', quad=False, rect=False, resume=False, save_dir='runs\\\\train\\\\exp8', save_period=-1, single_cls=False, sync_bn=False, total_batch_size=64, upload_dataset=False, weights='', workers=8, world_size=1)\r\ntensorboard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/\r\nhyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0\r\nwandb: Currently logged in as: zigar (use `wandb login --relogin` to force relogin)\r\nwandb: Network error (ReadTimeout), entering retry loop. See wandb\\debug-internal.log for full traceback.\r\n```\r\n\r\n\r\n## Expected behavior\r\nA clear and concise description of what you expected to happen.\r\n\r\n\r\n## Environment\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n - OS: [e.g. WIndows 10]\r\n - GPU [e.g. 
GRID V100D-32Q, 32638.0MB]\r\n\r\n\r\n## Additional context\r\nAdd any other context about the problem here.\r\n", "code": null, "pr_html_url": "https://github.com/ultralytics/yolov5/pull/2882", "commit_html_url": null, "file_loc": "{'base_commit': '5f7d39fede4de8af98472bd009c63c3a86568e2d', 'files': [{'path': 'data/argoverse_hd.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/coco.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/coco128.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'data/scripts/get_argoverse_hd.sh', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'data/scripts/get_coco.sh', 'status': 'modified', 'Loc': {'(None, None, 5)': {'mod': [5]}}}, {'path': 'data/scripts/get_voc.sh', 'status': 'modified', 'Loc': {'(None, None, 41)': {'add': [41]}, '(None, None, 77)': {'add': [77]}, '(None, None, 120)': {'add': [120]}, '(None, None, 5)': {'mod': [5]}, '(None, None, 32)': {'mod': [32, 33]}, '(None, None, 35)': {'mod': [35, 36]}, '(None, None, 38)': {'mod': [38]}, '(None, None, 40)': {'mod': [40]}, '(None, None, 43)': {'mod': [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]}, '(None, None, 57)': {'mod': [57, 58, 59]}, '(None, None, 68)': {'mod': [68]}, '(None, None, 72)': {'mod': [72, 73]}, '(None, None, 76)': {'mod': [76]}, '(None, None, 79)': {'mod': [79, 80, 81, 82]}, '(None, None, 84)': {'mod': [84]}, '(None, None, 93)': {'mod': [93]}, '(None, None, 95)': {'mod': [95]}, '(None, None, 97)': {'mod': [97, 98, 99, 100, 102, 103, 104]}, '(None, None, 106)': {'mod': [106]}, '(None, None, 108)': {'mod': [108, 109, 111, 112, 113, 114, 116, 117, 118, 119]}, '(None, None, 123)': {'mod': [123, 124, 126, 127, 128, 129, 131, 132, 133, 134]}}}, {'path': 'data/voc.yaml', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'utils/general.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 175]}, \"(None, 'check_dataset', 156)\": {'add': [166], 'mod': [164, 168, 169, 171]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/general.py"], "doc": [], "test": [], "config": ["data/argoverse_hd.yaml", "data/voc.yaml", "data/coco.yaml", "data/coco128.yaml"], "asset": ["data/scripts/get_argoverse_hd.sh", "data/scripts/get_voc.sh", "data/scripts/get_coco.sh"]}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2824", "iss_label": "bug", "title": " Sizes of tensors must match ", "body": "Multi Threaded Inference is not working with Yolo5. It throws the following error,\r\n\r\n```\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 113, in forward\r\n yi = self.forward_once(xi)[0] # forward\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 139, in forward_once\r\n x = m(x) # run\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/site-packages/torch/nn/modules/module.py\", line 889, in _call_impl\r\n result = self.forward(*input, **kwargs)\r\n File \"/home/zumbala/yolov5/models/yolo.py\", line 54, in forward\r\n y[..., 0:2] = (y[..., 0:2] * 2. 
- 0.5 + self.grid[i]) * self.stride[i] # xy\r\nRuntimeError: The size of tensor a (68) must match the size of tensor b (56) at non-singleton dimension 3\r\nException in thread Thread-112:\r\nTraceback (most recent call last):\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/threading.py\", line 932, in _bootstrap_inner\r\n self.run()\r\n File \"/home/zumbala/anaconda3/envs/environment/lib/python3.8/threading.py\", line 870, in run\r\n self._target(*self._args, **self._kwargs)\r\n```\r\n\r\nI saw the similar bug in other issue and I used the latest version of this repo. Still the problem persists. How can I fix it?\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ultralytics/yolov5/commit/cbd55da5d24becbe3b94afaaa4cdd1187a512c3f", "file_loc": "{'base_commit': 'cbd55da5d24becbe3b94afaaa4cdd1187a512c3f', 'files': [{'path': 'models/yolo.py', 'status': 'modified', 'Loc': {\"('Detect', 'forward', 38)\": {'mod': [52]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["models/yolo.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "d9b64c27c24db2001535bb480959aca015159510", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/119", "iss_label": "question\nStale", "title": "yolov5m\u6a21\u578b\uff0c\u753142M\u589e\u5927\u523084M\uff0c\u662f\u505a\u4e86\u4ec0\u4e48\u4fee\u6539\u4e48\uff1f", "body": "6.16\u6211\u505a\u8bad\u7ec3\u7684\u65f6\u5019\uff08yolov5m\uff09\uff0c\u8bad\u7ec3\u51fa\u6765\u7684\u6a21\u578b\u5927\u5c0f\u662f42M\r\n\r\n\u4f46\u662f\u4eca\u5929\uff086.18\uff09\u6211\u7528\u6700\u65b0\u4ee3\u7801\u8bad\u7ec3\u7684\u65f6\u5019\uff0c\u6a21\u578b\u5927\u5c0f\u662f84M\r\n\r\n\u8bf7\u95ee\u662f\u505a\u4e86\u4ec0\u4e48\u4fee\u6539\u4e48\uff1f", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ultralytics/yolov5/commit/d9b64c27c24db2001535bb480959aca015159510", "file_loc": "{'base_commit': 'd9b64c27c24db2001535bb480959aca015159510', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {\"(None, 'train', 60)\": {'mod': [335]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "bfd51f62f8e0a114cb94c269e83ff135e31d8bdb", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/187", "iss_label": "bug", "title": "can't test with my finetune weights", "body": "i train a model in my custom data, can get the weights (**last.pt** and **best.pt**)\r\ni run:\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/last.pt --device 4`\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --weights weights/best.pt --device 4`\r\nboth raise the error:\r\n**Traceback (most recent call last):\r\n File \"test.py\", line 277, in \r\n opt.verbose)\r\n File \"test.py\", line 86, in test\r\n names = model.names if hasattr(model, 'names') else model.module.names\r\n File \"/home/anaconda3/envs/yolov5/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 594, in __getattr__\r\n type(self).__name__, name))\r\nAttributeError: 'Model' object has no attribute 
'module'**\r\n\r\nHowever, i can run with the default weight **yolov5s.pt**\r\n`python test.py --img 640 --batch 16 --data ./data/patrol.yaml --device 4`\r\n\r\npytorch = 1.5", "code": null, "pr_html_url": "https://github.com/ultralytics/yolov5/pull/245", "commit_html_url": null, "file_loc": "{'base_commit': 'bfd51f62f8e0a114cb94c269e83ff135e31d8bdb', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {\"(None, 'train', 62)\": {'add': [135, 136, 174], 'mod': [82, 291]}, '(None, None, None)': {'mod': [375]}}}, {'path': 'utils/torch_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [56]}, \"(None, 'model_info', 101)\": {'mod': [114, 115]}, \"('ModelEMA', 'update', 184)\": {'mod': [188]}, \"('ModelEMA', 'update_attr', 198)\": {'mod': [199, 200, 201, 202]}}}, {'path': 'utils/utils.py', 'status': 'modified', 'Loc': {\"(None, 'check_img_size', 48)\": {'mod': [50]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py", "utils/utils.py", "utils/torch_utils.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/227", "iss_label": "", "title": "Short text samples", "body": "It would be awesome to be able to use this to help train a hot word detector. In addition to recording myself saying the hotword, I could create an even larger dataset by adding outputs of this model that used my voice as the reference.\r\n\r\nThe problem with that, however, is that this model seems to only work well on sentences of medium length (+- 20 words according to demo_cli.py). Is there anything I can do to make short text samples (e.g. 
2 words) sound better?", "code": null, "pr_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472", "commit_html_url": null, "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 18)': {'mod': [18]}, '(None, None, 23)': {'mod': [23, 24]}, '(None, None, 65)': {'mod': [65, 66, 68, 70]}}}, {'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13, 43, 162], 'mod': [24, 25, 26, 30, 31, 32, 70, 76]}}}, {'path': 'demo_toolbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 32], 'mod': [23, 24, 25]}}}, {'path': 'encoder/audio.py', 'status': 'modified', 'Loc': {\"(None, 'preprocess_wav', 19)\": {'mod': [20, 43, 44]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 16)': {'add': [16]}, '(None, None, 1)': {'mod': [1]}}}, {'path': 'requirements_gpu.txt', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/LICENSE.txt', 'status': 'modified', 'Loc': {'(None, None, 3)': {'add': [3]}, '(None, None, 4)': {'add': [4]}}}, {'path': 'synthesizer/audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'synthesizer/feeder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/hparams.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [348], 'mod': [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 149, 150, 151, 152, 153, 154, 155, 157, 158, 159, 160, 161, 162, 164, 165, 166, 167, 168, 169, 170, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 190, 191, 192, 193, 194, 196, 197, 198, 199, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 231, 232, 233, 234, 235, 237, 238, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 342, 343, 344, 345, 347]}, \"(None, 'hparams_debug_string', 350)\": {'mod': [351, 352, 353]}}}, {'path': 'synthesizer/inference.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [1, 2, 3, 4, 5, 9, 11]}, \"('Synthesizer', '__init__', 19)\": {'add': [33], 'mod': [21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59]}, \"('Synthesizer', 'griffin_lim', 149)\": {'add': [154]}, \"('Synthesizer', None, 15)\": {'mod': [19, 106, 107, 108, 109, 110, 111, 113, 114, 116, 117, 118, 119, 121]}, \"('Synthesizer', 'is_loaded', 61)\": {'mod': [63]}, \"('Synthesizer', 'load', 67)\": {'mod': [69, 70, 71, 72, 73, 74, 75]}, \"('Synthesizer', 
'synthesize_spectrograms', 77)\": {'mod': [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 104]}}}, {'path': 'synthesizer/infolog.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/__init__.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/architecture_wrappers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/attention.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/custom_decoder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/helpers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/modules.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/tacotron.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [1, 2, 3, 4, 5, 6, 7, 8, 9]}, \"(None, 'split_func', 14)\": {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 79, 81, 82, 84, 86, 87, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 145, 147, 148, 151, 153, 154, 155, 156, 157, 158, 160, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 198, 199, 200, 201, 202, 203, 205, 206, 207, 209, 210, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 225, 226, 228, 229, 230, 232, 233, 234, 235, 237, 238, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 252, 253, 254, 256, 257, 259, 260, 261, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 312, 313, 314, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 327, 328, 330, 331, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 405, 406, 407, 409, 410, 412, 413, 414, 415, 416, 417, 418, 420, 421, 422, 423, 424, 425, 427, 428, 429, 430, 431, 432, 433, 435, 436, 437, 439, 441, 442, 443, 444, 445, 446, 447, 448, 449, 451, 452, 454, 455, 456, 457, 458, 459, 460, 461, 462, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 501, 502, 504, 505, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 521]}}}, {'path': 'synthesizer/preprocess.py', 'status': 'modified', 'Loc': {\"(None, 'process_utterance', 185)\": {'add': [204]}}}, {'path': 'synthesizer/synthesize.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82], 'mod': [1, 3, 4, 6, 7]}, \"(None, 'run_eval', 10)\": {'mod': [10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37]}, \"(None, 'run_synthesis', 39)\": {'mod': [40, 41, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 78, 80, 81]}}}, {'path': 'synthesizer/tacotron2.py', 'status': 'removed', 'Loc': {}}, 
{'path': 'synthesizer/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 79, 83], 'mod': [3, 4, 5, 6, 7, 9, 10, 12, 14, 16, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]}, \"(None, 'model_train_mode', 85)\": {'mod': [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 136, 138, 139, 141, 142, 143, 144, 146, 147, 148, 149]}, \"(None, 'train', 110)\": {'mod': [151, 152, 153, 154, 155, 156, 157, 159, 161, 167, 169, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 184, 185, 186, 187, 189, 190, 191, 192, 194, 195, 196, 198, 199, 201, 202, 204, 205, 207, 208, 210, 212, 213, 214, 215, 216, 218, 219, 220, 222, 223, 224, 226, 227, 228, 230, 231, 232, 233, 234, 235, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 374, 375, 376, 377, 378, 379, 381, 382, 383, 385, 386, 387, 388, 391, 392]}}}, {'path': 'synthesizer/utils/__init__.py', 'status': 'modified', 'Loc': {\"('ValueWindow', None, 1)\": {'add': [0]}}}, {'path': 'synthesizer_train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 53, 55]}}}, {'path': 'toolbox/__init__.py', 'status': 'modified', 'Loc': {\"('Toolbox', 'init_encoder', 325)\": {'add': [333]}, \"('Toolbox', None, 42)\": {'mod': [43]}, \"('Toolbox', '__init__', 43)\": {'mod': [54]}, \"('Toolbox', 'synthesize', 207)\": {'mod': [211, 212, 213, 214, 215, 216, 217, 221, 224, 228]}, \"('Toolbox', 'vocode', 237)\": {'mod': [243]}}}, {'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [41]}, \"('UI', None, 53)\": {'mod': [331]}, \"('UI', 'populate_models', 338)\": {'mod': [347, 348, 349, 350, 351, 352, 353]}}}, {'path': 'vocoder_preprocess.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32, 40], 'mod': [20]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/models/modules.py", "synthesizer/models/tacotron.py", "synthesizer/train.py", "synthesizer/models/attention.py", "synthesizer_train.py", "demo_cli.py", "toolbox/__init__.py", "demo_toolbox.py", "synthesizer/models/architecture_wrappers.py", "synthesizer/audio.py", "synthesizer/preprocess.py", "synthesizer/tacotron2.py", "synthesizer/hparams.py", "synthesizer/utils/__init__.py", "synthesizer/synthesize.py", "toolbox/ui.py", "encoder/audio.py", "synthesizer/feeder.py", "synthesizer/models/helpers.py", "synthesizer/models/__init__.py", "synthesizer/inference.py", "vocoder_preprocess.py", 
"synthesizer/models/custom_decoder.py", "synthesizer/infolog.py"], "doc": ["synthesizer/LICENSE.txt", "README.md"], "test": [], "config": ["requirements_gpu.txt", "requirements.txt"], "asset": []}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "f108782e30369dedfc66f22d21c2b72c77941de7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5050", "iss_label": "bug", "title": "[Bug]: img2img sampler is not changing", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nI'm trying to choose another sampler, but it is not working.\r\n\r\nI tried checking the p value, and found sampler_name = None\r\nThere seems to be a code missing to assign the variable sampler_name in the img2img\r\n\r\ntxt2img seems working fine, though.\n\n### Steps to reproduce the problem\n\nChange the sampler and see the results. They are all the same.\n\n### What should have happened?\n\nDifferent samplers should produce different results.\n\n### Commit where the problem happens\n\n828438b\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\n_No response_", "code": null, "pr_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4910", "commit_html_url": null, "file_loc": "{'base_commit': 'f108782e30369dedfc66f22d21c2b72c77941de7', 'files': [{'path': 'scripts/xy_grid.py', 'status': 'modified', 'Loc': {\"(None, 'confirm_samplers', 71)\": {'add': [74]}, \"('Script', 'process_axis', 276)\": {'add': [279]}}}, {'path': 'img2img.py', 'Loc': {}}, {'path': 'Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["img2img.py", "scripts/xy_grid.py"], "doc": [], "test": [], "config": [], "asset": ["Line 102: sampler_index=sd_samplers.samplers_for_img2img[sampler_index].name"]}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "d9499f4301018ebd2977685d098381aa4111d2ae", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/13724", "iss_label": "enhancement", "title": "[Feature Request]: Sort items by date by default", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What would your feature do ?\n\nI hope to use time sorting by default when opening additional interfaces, so that I can immediately try the new model I just downloaded.\n\n### Proposed workflow\n\n1. Go to .... \r\n2. Press ....\r\n3. 
...\r\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/d9499f4301018ebd2977685d098381aa4111d2ae", "file_loc": "{'base_commit': 'd9499f4301018ebd2977685d098381aa4111d2ae', 'files': [{'path': 'javascript/extraNetworks.js', 'status': 'modified', 'Loc': {\"(None, 'setupExtraNetworksForTab', 18)\": {'add': [51, 54, 98, 99], 'mod': [30, 56, 57, 58, 59, 65, 91, 92, 93, 94]}, '(None, None, None)': {'add': [115]}, \"(None, 'applyExtraNetworkSort', 116)\": {'add': [116]}}}, {'path': 'modules/shared_options.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [236]}}}, {'path': 'modules/ui_extra_networks.py', 'status': 'modified', 'Loc': {\"(None, 'create_ui', 357)\": {'add': [397], 'mod': [384, 385]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/shared_options.py", "modules/ui_extra_networks.py", "javascript/extraNetworks.js"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "22bcc7be428c94e9408f589966c2040187245d81", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/9102", "iss_label": "bug-report", "title": "[Bug]: Model Dropdown Select on Firefox is obscured by svelte pre-loader", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nSeems the latest pull request has added pre-loaders and I noticed that the model dropdown is constantly loading and therefore obscuring the dropdown from the user. 
This is only happening in Firefox, Chrome for example is fine.\r\n\r\n```\r\n.wrap.default.svelte-gjihhp {\r\ninset: 0;\r\n}\r\n```\r\n\r\nI just set it to `display: none` to access it\n\n### Steps to reproduce the problem\n\nLoad up in Firefox and try to change the model\r\n\r\nSee attached screenshot\r\n![Screenshot_1](https://user-images.githubusercontent.com/3169931/228311409-22be3832-0348-424c-9298-08e76cb166a7.jpg)\r\n\r\n\n\n### What should have happened?\n\nHave access to the model dropdown select\n\n### Commit where the problem happens\n\nf1db987\n\n### What platforms do you use to access the UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nMozilla Firefox\n\n### Command Line Arguments\n\n```Shell\nNo\n```\n\n\n### List of extensions\n\nNo\n\n### Console logs\n\n```Shell\nUncaught (in promise) TypeError: q[R[H]] is undefined\r\n ct http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\r\n ct http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\r\n ze http://127.0.0.1:7860/assets/Blocks.5efe22d4.js:76\n```\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/joysun545/stable-diffusion-webui/commit/22bcc7be428c94e9408f589966c2040187245d81", "file_loc": "{'base_commit': '22bcc7be428c94e9408f589966c2040187245d81', 'files': [{'path': 'modules/ui.py', 'status': 'modified', 'Loc': {\"(None, 'create_ui', 437)\": {'add': [1630]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "a8e3336a850e856188350a93e67d77c07c85b8af", "iss_html_url": "https://github.com/huggingface/transformers/issues/2008", "iss_label": "wontfix", "title": "Expand run_lm_finetuning.py to all models", "body": "## \ud83d\ude80 Feature\r\n\r\n[run_lm_finetuning.py](https://github.com/huggingface/transformers/blob/b0ee7c7df3d49a819c4d6cef977214bd91f5c075/examples/run_lm_finetuning.py) is a very useful tool for finetuning many models the library provided. But it doesn't cover all the models. Currently available models are:\r\n\r\n- gpt2\r\n- openai-gpt\r\n- bert\r\n- roberta\r\n- distilbert\r\n- camembert\r\n\r\nAnd not available ones:\r\n\r\n- ctrl\r\n- xlm\r\n- xlnet\r\n- transfo-xl\r\n- albert\r\n\r\n## Motivation\r\n\r\nMost important part of such a library is that it can be easily finetuned. 
`run_lm_finetuning.py` gives us that opportunity but why say no more :)\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/huggingface/transformers/commit/3dcb748e31be8c7c9e4f62926c5c144c62d07218\n\nhttps://github.com/huggingface/transformers/commit/a8e3336a850e856188350a93e67d77c07c85b8af", "file_loc": "{'base_commit': 'a8e3336a850e856188350a93e67d77c07c85b8af', 'files': [{'path': 'examples/ner/run_ner.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33], 'mod': [41]}}}, {'path': 'examples/ner/run_tf_ner.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [16, 17, 18, 19, 21, 22, 23, 24, 25, 37, 38, 39, 41, 42, 43, 44, 45, 52]}, \"(None, 'main', 457)\": {'mod': [512, 513, 523, 530, 565, 587, 614, 615]}}}, {'path': 'examples/run_glue.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32], 'mod': [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 676, 677, 683, 695]}, \"(None, 'train', 69)\": {'mod': [75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101]}, \"(None, 'main', 386)\": {'mod': [445, 625, 626, 632, 637]}}}, {'path': 'examples/run_language_modeling.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [40], 'mod': [43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 60, 61, 62, 789]}, \"('TextDataset', '__init__', 68)\": {'mod': [76, 77, 78, 79, 80, 81, 82, 83]}, \"(None, 'main', 464)\": {'mod': [696, 699, 701, 703, 706, 708, 712, 722, 730, 771, 772]}}}, {'path': 'examples/run_squad.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32], 'mod': [35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 75, 76, 77, 78, 79, 80, 81, 845]}, \"(None, 'train', 76)\": {'mod': [83, 84, 85, 86, 87, 88, 89, 90, 91]}, \"(None, 'main', 477)\": {'mod': [516, 760, 761, 765, 770, 820, 821]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [160, 319]}}}, {'path': 'templates/adding_a_new_example_script/run_xxx.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30], 'mod': [33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 75, 76, 77, 78, 79, 80, 709]}, \"(None, 'set_seed', 69)\": {'mod': [71, 72, 73]}, \"(None, 'main', 388)\": {'mod': [421, 629, 630, 634, 639, 690, 691]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["examples/run_squad.py", "templates/adding_a_new_example_script/run_xxx.py", "examples/run_glue.py", "src/transformers/__init__.py", "examples/ner/run_tf_ner.py", "examples/run_language_modeling.py", "examples/ner/run_ner.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b", "iss_html_url": "https://github.com/huggingface/transformers/issues/5212", "iss_label": "", "title": "BartConfig wrong decoder_start_token_id?", "body": "# \ud83d\udc1b Bug\r\n\r\n## Information\r\n\r\nModel I am using (Bert, XLNet ...): Bart\r\n\r\nLanguage I am using the model on (English, Chinese ...): English\r\n\r\n## To reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n```\r\nfrom transformers import BartConfig, BartTokenizer\r\nconfig = BartConfig.from_pretrained('facebook/bart-large')\r\ntokenizer = 
BartTokenizer.from_pretrained('facebook/bart-large')\r\nconfig.decoder_start_token_id\r\n>>> 2\r\ntokenizer.bos_token_id\r\n>>> 0 # != config.decoder_start_token_id\r\ntokenizer.eos_token_id\r\n>>> 2\r\n```\r\n\r\nIt is misleading in the documentation of the function ```generate````\r\n\r\n*decoder_start_token_id=None \u2013 (optional) int If an encoder-decoder model starts decoding with a different token than BOS. Defaults to None and is changed to BOS later.*\r\n\r\n\r\n## Expected behavior\r\n\r\nI expect that decoder_start_token_id = tokenizer.bos_token_id, but maybe the model is designed to start decoding with EOS token.\r\n\r\n", "code": null, "pr_html_url": "https://github.com/huggingface/transformers/pull/5306", "commit_html_url": null, "file_loc": "{'base_commit': '88d7f96e33c3f3e541bcdd913f2ff1e50aa18c1b', 'files': [{'path': 'src/transformers/modeling_tf_utils.py', 'status': 'modified', 'Loc': {\"('TFPreTrainedModel', 'generate', 551)\": {'mod': [645, 646]}}}, {'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {\"('PreTrainedModel', 'generate', 871)\": {'mod': [965, 966]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code\n\u91cc\u7684docstring"}, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/modeling_tf_utils.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "ba9e3fe6267ea81a3d546b7aa5bcf0122f365e51", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1153", "iss_label": "bug", "title": "mermaid: Generating ..seq_flow/20240402032019.svg.. Error: Failed to launch the browser process!", "body": "**Bug description**\r\nGenerating /app/metagpt/workspace/jqdlhwap/resources/seq_flow/20240402032019.svg.. Error: Failed to launch the browser process!\r\n\r\n**Bug solved method**\r\n\r\n\r\n**Environment information**\r\n\"docker compose up -d\" after clone\r\nalready run \"npm install -g @mermaid-js/mermaid-cli\":\r\n\r\nroot@84a2e77496b0:/app/metagpt# mmdc -h\r\nUsage: mmdc [options]\r\n\r\nOptions:\r\n -V, --version output the version number\r\n -t, --theme [theme] Theme of the chart (choices: \"default\", \"forest\", \"dark\", \"neutral\", default: \"default\")\r\n -w, --width [width] Width of the page (default: 800)\r\n -H, --height [height] Height of the page (default: 600)\r\n -i, --input Input mermaid file. Files ending in .md will be treated as Markdown and all charts (e.g. ```mermaid (...)```) will be extracted and generated.\r\n Use `-` to read from stdin.\r\n -o, --output [output] Output file. It should be either md, svg, png or pdf. Optional. Default: input + \".svg\"\r\n -e, --outputFormat [format] Output format for the generated image. (choices: \"svg\", \"png\", \"pdf\", default: Loaded from the output file extension)\r\n -b, --backgroundColor [backgroundColor] Background color for pngs/svgs (not pdfs). Example: transparent, red, '#F0F0F0'. 
(default: \"white\")\r\n -c, --configFile [configFile] JSON configuration file for mermaid.\r\n -C, --cssFile [cssFile] CSS file for the page.\r\n -s, --scale [scale] Puppeteer scale factor (default: 1)\r\n -f, --pdfFit Scale PDF to fit chart\r\n -q, --quiet Suppress log output\r\n -p --puppeteerConfigFile [puppeteerConfigFile] JSON configuration file for puppeteer.\r\n -h, --help display help for command\r\n\r\n- LLM type and model name: zhipu-api / GLM-4\r\n- System version:\r\n- Python version:\r\n- MetaGPT version or branch: main\r\n\r\n`run in docker`\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n2024-04-02 03:20:46.126 | INFO | metagpt.utils.mermaid:mermaid_to_file:48 - Generating /app/metagpt/workspace/jqdlhwap/resources/seq_flow/20240402032019.svg..\r\n2024-04-02 03:20:46.460 | WARNING | metagpt.utils.mermaid:mermaid_to_file:74 - \r\nError: Failed to launch the browser process!\r\n[0402/032046.449080:ERROR:zygote_host_impl_linux.cc(100)] Running as root without --no-sandbox is not supported. See https://crbug.com/638180.\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at Interface.onClose (file:///usr/lib/node_modules/@mermaid-js/mermaid-cli/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at Interface.emit (node:events:524:35)\r\n at Interface.close (node:internal/readline/interface:526:10)\r\n at Socket.onend (node:internal/readline/interface:252:10)\r\n at Socket.emit (node:events:524:35)\r\n at endReadableNT (node:internal/streams/readable:1378:12)\r\n at process.processTicksAndRejections (node:internal/process/task_queues:82:21)\r\n", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1155", "commit_html_url": null, "file_loc": "{'base_commit': 'ba9e3fe6267ea81a3d546b7aa5bcf0122f365e51', 'files': [{'path': 'metagpt/configs/mermaid_config.py', 'status': 'modified', 'Loc': {\"('MermaidConfig', None, 13)\": {'mod': [16]}}}, {'path': 'metagpt/utils/mermaid.py', 'status': 'modified', 'Loc': {\"(None, 'mermaid_to_file', 17)\": {'add': [83]}}}, {'path': 'config/config2.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/configs/mermaid_config.py", "metagpt/utils/mermaid.py"], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "c779f6977ecbdba075d7c81519edd5eaf6de2d0e", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1197", "iss_label": "", "title": "Support for Cohere API ", "body": "Please add support for Cohere API with all the built in RAG and tool use functionalities. Essentially, RAG and tool use in Cohere are just chat parameters definable by users. 
More information can be found at https://docs.cohere.com/reference/chat .", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1193", "commit_html_url": null, "file_loc": "{'base_commit': 'c779f6977ecbdba075d7c81519edd5eaf6de2d0e', 'files': [{'path': 'metagpt/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [53]}}}, {'path': 'metagpt/rag/factories/ranker.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, \"('RankerFactory', '__init__', 20)\": {'add': [24]}, \"('RankerFactory', None, 17)\": {'add': [47]}}}, {'path': 'metagpt/rag/schema.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [121]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [42]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/rag/schema.py", "metagpt/rag/factories/ranker.py", "setup.py", "metagpt/const.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "b5bb4d7e63e72c3d118e449a3763c1ff4411f159", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1547", "iss_label": "", "title": "ERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html", "body": "**Bug description**\r\nI installed playwright and its chrominum with the guidance and made configuration of mermaid. But it seems that the mermaid didn't work normally. \r\n```\r\nERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\r\nCall log:\r\nnavigating to \"file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\", waiting until \"load\"\r\n```\r\n\r\n**Bug solved method**\r\nI think it may do with the configuration but the document of this part is not so clear. I don't know how to fill in the \"path\" if I'm using playwright and whether it is right to keep that stuff in default.\r\n\r\nThis is my configuration:\r\n```yaml\r\nllm:\r\n api_type: 'openai' # or azure / ollama / groq etc. Check LLMType for more options\r\n api_key: '[MY_API_KEY]' # MY_API_KEY\r\n model: 'yi-lightning' # or gpt-3.5-turbo\r\n base_url: 'https://api.lingyiwanwu.com/v1' # or any forward url.\r\n # proxy: 'YOUR_LLM_PROXY_IF_NEEDED' # Optional. If you want to use a proxy, set it here.\r\n # pricing_plan: 'YOUR_PRICING_PLAN' # Optional. 
If your pricing plan uses a different name than the `model`.\r\n\r\nmermaid:\r\n engine: 'playwright' # nodejs/ink/playwright/pyppeteer\r\n # path: 'mmdc' # such as './node_modules/.bin/mmdc'\r\n # puppeteer_config: './config/puppeteer-config' # only for nodejs\r\n # pyppeteer_path: '/usr/bin/google-chrome-stable' # only for pyppeteer\r\n```\r\n\r\nThis is your example:\r\n```yaml\r\nmermaid:\r\n engine: 'nodejs' # nodejs/ink/playwright/pyppeteer\r\n path: 'mmdc' # such as './node_modules/.bin/mmdc'\r\n puppeteer_config: './config/puppeteer-config' # only for nodejs\r\n pyppeteer_path: '/usr/bin/google-chrome-stable' # only for pyppeteer\r\n```\r\n\r\n**Environment information**\r\n- LLM type and model name:\r\n- System version: ubuntu 22.04\r\n- Python version: 3.11\r\n- MetaGPT version or branch: 0.8\r\n\r\n\r\n\r\n- packages version:\r\n- installation method: pip\r\n\r\n**Screenshots or logs**\r\n2024-10-29 16:22:16.900 | WARNING | metagpt.utils.cost_manager:update_cost:49 - Model yi-lightning not found in TOKEN_COSTS.\r\n2024-10-29 16:22:16.903 | INFO | metagpt.utils.git_repository:rename_root:219 - Rename directory /root/workspace/cited_papaer_eval/workspace/20241029162137 to /root/workspace/cited_papaer_eval/workspace/scholar_cited_evaluation\r\n2024-10-29 16:22:16.904 | INFO | metagpt.utils.file_repository:save:57 - save to: /root/workspace/cited_papaer_eval/workspace/scholar_cited_evaluation/docs/prd/20241029162216.json\r\n2024-10-29 16:22:17.435 | ERROR | metagpt.utils.mmdc_playwright:mermaid_to_file:118 - Page.goto: net::ERR_FILE_NOT_FOUND at file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\r\nCall log:\r\nnavigating to \"file:///root/anaconda3/envs/metagpt/lib/python3.11/site-packages/metagpt/utils/index.html\", waiting until \"load\"\r\n", "code": null, "pr_html_url": "https://github.com/FoundationAgents/MetaGPT/pull/1564", "commit_html_url": null, "file_loc": "{'base_commit': 'b5bb4d7e63e72c3d118e449a3763c1ff4411f159', 'files': [{'path': 'metagpt/utils/mmdc_playwright.py', 'status': 'modified', 'Loc': {\"(None, 'mermaid_to_file', 17)\": {'mod': [84, 85, 86, 87]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/utils/mmdc_playwright.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7276f699fc85c611f1c3f83a19a368da9841e3a4", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2892", "iss_label": "question", "title": "Flow as tool Usage", "body": "### Discussed in https://github.com/langflow-ai/langflow/discussions/2891\r\n\r\n
      \r\n\r\nOriginally posted by **pavansandeep2910** July 23, 2024\r\nI cannot understand how to load files to see them in the flow-as-tool component. Can anyone direct me to flow-as-tool usage?
      ", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3093", "commit_html_url": null, "file_loc": "{'base_commit': '7276f699fc85c611f1c3f83a19a368da9841e3a4', 'files': [{'path': 'src/backend/base/langflow/components/prototypes/SubFlow.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [4, 6, 9, 10, 11, 12]}, \"('SubFlowComponent', 'build', 98)\": {'add': [103], 'mod': [102, 105, 108, 112, 114, 115]}, \"('SubFlowComponent', None, 15)\": {'mod': [15, 17, 18, 19, 22, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 98, 99, 100]}, \"('SubFlowComponent', 'get_flow_names', 24)\": {'mod': [25, 26]}, \"('SubFlowComponent', 'update_build_config', 35)\": {'mod': [36, 39, 41]}, \"('SubFlowComponent', 'add_inputs_to_build_config', 58)\": {'mod': [71]}}}, {'path': 'src/backend/base/langflow/initial_setup/setup.py', 'status': 'modified', 'Loc': {\"(None, 'load_starter_projects', 342)\": {'mod': [346]}}}, {'path': 'src/backend/base/langflow/inputs/inputs.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [231]}, \"('SecretStrInput', None, 215)\": {'mod': [229]}}}, {'path': 'src/frontend/src/CustomNodes/GenericNode/components/handleRenderComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 1)': {'mod': [1]}, '(None, None, 4)': {'mod': [4]}, '(None, None, 40)': {'mod': [40, 41, 42, 43]}, '(None, None, 103)': {'mod': [103]}}}, {'path': 'src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 352)': {'mod': [352, 353]}, '(None, None, 365)': {'mod': [365, 366]}}}, {'path': 'src/frontend/src/CustomNodes/hooks/use-handle-new-value.tsx', 'status': 'modified', 'Loc': {'(None, None, 66)': {'mod': [66, 67]}}}, {'path': 'src/frontend/src/components/parameterRenderComponent/component/refreshParameterComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 32)': {'mod': [32]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/components/prototypes/SubFlow.py", "src/backend/base/langflow/initial_setup/setup.py", "src/backend/base/langflow/inputs/inputs.py", "src/frontend/src/components/parameterRenderComponent/component/refreshParameterComponent/index.tsx", "src/frontend/src/CustomNodes/GenericNode/components/parameterComponent/index.tsx", "src/frontend/src/CustomNodes/GenericNode/components/handleRenderComponent/index.tsx", "src/frontend/src/CustomNodes/hooks/use-handle-new-value.tsx"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "b3b5290598f5970fd6a1a092fe4d11211008a04d", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/5378", "iss_label": "bug", "title": "URL component lost imported urls in tool mode when refresh UI", "body": "### Bug Description\n\nI have this issue when i build multi agent flow like import json below\r\n- URL Component Before refresh UI:\r\n![image](https://github.com/user-attachments/assets/f361501d-3d11-4e4a-b560-203faf8a4935)\r\n![image](https://github.com/user-attachments/assets/4ee6c93f-914b-4efa-b0eb-5a13f5c404fa)\r\n\r\nAfter refresh 
UI:\r\n![image](https://github.com/user-attachments/assets/2224a76e-1dd7-42d4-8531-a5e86719653c)\r\n![image](https://github.com/user-attachments/assets/3753754f-3a16-4bfc-a82e-1286c7504fe3)\r\n\r\n\r\nI have check in normal mode of URL Component, and this bug does not appear.\n\n### Reproduction\n\n1. Create flow\r\n2. Add URL component, change to Tool mode\r\n3. Input Urls\r\n4. Save flow\r\n5. Reload UI (Press F5)\n\n### Expected behavior\n\nURL component has keep Urls has input. In Tool mode\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nWindows 11/Docker\n\n### Langflow Version\n\nv1.1.1\n\n### Python Version\n\nNone\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n[Simple Agent (bug).json](https://github.com/user-attachments/files/18206463/Simple.Agent.bug.json)\r\n", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/5316", "commit_html_url": null, "file_loc": "{'base_commit': 'b3b5290598f5970fd6a1a092fe4d11211008a04d', 'files': [{'path': 'src/frontend/src/components/core/parameterRenderComponent/components/toggleShadComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 39)': {'mod': [39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54]}}}, {'path': 'src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx', 'status': 'modified', 'Loc': {'(None, None, 5)': {'add': [5]}, '(None, None, 44)': {'add': [44]}, '(None, None, 79)': {'add': [79]}, '(None, None, 98)': {'add': [98]}, '(None, None, 149)': {'add': [149]}, '(None, None, 11)': {'mod': [11]}, '(None, None, 23)': {'mod': [23, 24, 25]}, '(None, None, 145)': {'mod': [145, 146]}, '(None, None, 158)': {'mod': [158, 159]}, '(None, None, 174)': {'mod': [174]}, '(None, None, 291)': {'mod': [291]}, '(None, None, 403)': {'mod': [403, 404, 405, 407, 408, 409]}, '(None, None, 421)': {'mod': [421, 422, 423, 424, 425, 426]}, '(None, None, 471)': {'mod': [471, 472, 473, 474, 475, 476, 477, 478, 479, 480, 481, 482, 483, 484, 485]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/pages/FlowPage/components/nodeToolbarComponent/index.tsx", "src/frontend/src/components/core/parameterRenderComponent/components/toggleShadComponent/index.tsx"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "a4dc5381b2cf31c507cc32f9027f76bf00d61ccc", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3536", "iss_label": "bug", "title": "Prompt component does not pass variables correctly", "body": "### Bug Description\n\nI have prompt with two variables. 
\r\n{image_url} Value of image is https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\n{post_id} Value of Post ID is 11620\r\n\r\nPrompt is \r\nTask: Upload the image from the provided URL to WordPress and set it as the featured image for the specified post.\r\nImage URL: {image_url}\r\nPost ID: {post_id}\r\n\r\nThis worked on 1.0.15, but after I upgraded to 1.0.16 the second variable is not passed on, it repeats the image one.\r\n\r\nComponent output\r\nTask: Upload the image from the provided URL to WordPress and set it as the featured image for the specified post.\r\nImage URL: https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\nPost ID: https://oaidalleapiprodscus.blob.core.windows.net/private/org-N0bzhj17kGCdvkPCuGgpUuhO/user-RwGFQKsTTGw8hOO1ResFbpwQ/img-GH9Zkmf4RabhW48gVeZslvyb.png?st=2024-08-23T19%3A00%3A31Z&se=2024-08-23T21%3A00%3A31Z&sp=r&sv=2024-08-04&sr=b&rscd=inline&rsct=image/png&skoid=d505667d-d6c1-4a0a-bac7-5c84a87759f8&sktid=a48cca56-e6da-484e-a814-9c849652bcb3&skt=2024-08-23T18%3A20%3A28Z&ske=2024-08-24T18%3A20%3A28Z&sks=b&skv=2024-08-04&sig=HXgLmH4hDFvbuV/rPEX//ifP0PLXUUlCYP9ZDpFEAHM%3D\r\n\n\n### Reproduction\n\nCreate prompt with multiple variables.\r\nAdd value in each.\r\nTry to build prompt and you will find that only first variable is being pulled.\r\n![image](https://github.com/user-attachments/assets/cc478377-6c3d-4a71-ba13-ab6e1773413e)\r\n\r\n\r\n![image](https://github.com/user-attachments/assets/fad07891-e9d1-474f-b7a9-efb3084c4caf)\r\n\n\n### Expected behavior\n\nmultiple variables should work\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nRender\n\n### Langflow Version\n\n1.0.16\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3698", "commit_html_url": null, "file_loc": "{'base_commit': 'a4dc5381b2cf31c507cc32f9027f76bf00d61ccc', 'files': [{'path': 'src/backend/base/langflow/custom/custom_component/component.py', 'status': 'modified', 'Loc': {\"('Component', '__init__', 42)\": {'add': [71], 'mod': [45, 56]}, \"('Component', '_reset_all_output_values', 88)\": {'mod': [89, 90]}, \"('Component', '_build_state_model', 92)\": {'mod': [98]}, \"('Component', '__deepcopy__', 112)\": {'mod': [119]}, \"('Component', 'list_outputs', 166)\": {'mod': [170]}, \"('Component', 'get_output', 210)\": {'mod': [223, 224]}, \"('Component', 'set_output_value', 234)\": {'mod': [235, 236]}, \"('Component', 'map_outputs', 240)\": {'mod': [253, 257]}, \"('Component', 'map_inputs', 259)\": {'mod': [270]}, \"('Component', '_set_output_types', 290)\": {'mod': [291]}, \"('Component', 'get_output_by_method', 296)\": 
{'mod': [299]}, \"('Component', '_find_matching_output_method', 327)\": {'mod': [329]}, \"('Component', '__getattr__', 440)\": {'mod': [445, 446]}, \"('Component', '_set_outputs', 577)\": {'mod': [581]}, \"('Component', '_build_results', 619)\": {'mod': [623]}}}, {'path': 'src/backend/base/langflow/graph/graph/base.py', 'status': 'modified', 'Loc': {\"('Graph', '__apply_config', 318)\": {'mod': [322]}}}, {'path': 'src/backend/base/langflow/template/field/base.py', 'status': 'modified', 'Loc': {\"('Output', None, 161)\": {'add': [180]}}}, {'path': 'src/backend/tests/unit/test_custom_component.py', 'status': 'modified', 'Loc': {\"(None, 'test_custom_component_get_function_entrypoint_args_no_args', 397)\": {'add': [402]}}}, {'path': 'src/backend/tests/unit/test_database.py', 'status': 'modified', 'Loc': {\"(None, 'test_read_flow', 76)\": {'mod': [76, 79]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/template/field/base.py", "src/backend/base/langflow/graph/graph/base.py", "src/backend/base/langflow/custom/custom_component/component.py"], "doc": [], "test": ["src/backend/tests/unit/test_custom_component.py", "src/backend/tests/unit/test_database.py"], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "20ceb42504087c712aaee41bfc17a870ae0109d4", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/2039", "iss_label": "enhancement", "title": "[Feat] Firecrawl\ud83d\udd25 Integration ", "body": "Hi all,\r\n\r\nOpening this issue after chatting with Rodrigo. It would be awesome to add a [Firecrawl](https://firecrawl.dev) web loader / tool for people to use it to scrape, crawl and extract LLM ready data from the web.\r\n\r\nWould love to hear your thoughts on how we can best integrate it.\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/2359", "commit_html_url": null, "file_loc": "{'base_commit': '20ceb42504087c712aaee41bfc17a870ae0109d4', 'files': [{'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, 2114)': {'add': [2114]}, '(None, None, 2436)': {'add': [2436]}, '(None, None, 2596)': {'add': [2596]}, '(None, None, 2600)': {'add': [2600]}, '(None, None, 4618)': {'add': [4618]}, '(None, None, 6082)': {'add': [6082]}, '(None, None, 2435)': {'mod': [2435]}, '(None, None, 2595)': {'mod': [2595]}, '(None, None, 2599)': {'mod': [2599]}, '(None, None, 4617)': {'mod': [4617]}, '(None, None, 6085)': {'mod': [6085]}, '(None, None, 10555)': {'mod': [10555]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, 94)': {'add': [94]}}}, {'path': 'src/backend/base/poetry.lock', 'status': 'modified', 'Loc': {'(None, None, 741)': {'add': [741]}, '(None, None, 3238)': {'mod': [3238]}}}, {'path': 'src/backend/base/pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, 66)': {'add': [66]}}}, {'path': 'src/frontend/src/utils/styleUtils.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [173, 365]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/utils/styleUtils.ts"], "doc": [], "test": [], "config": ["pyproject.toml", "poetry.lock", "src/backend/base/poetry.lock", "src/backend/base/pyproject.toml"], "asset": []}} 
+{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "9d8009f2f5c5e3fd3bf47760debc787deb454b1a", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3004", "iss_label": "bug", "title": "Problem with Global Variables Setting Page", "body": "### Bug Description\r\n\r\nWhen entering http://127.0.0.1:7860/settings/global-variables\r\n\r\nI am getting error in JS console.\r\n```\r\n`DialogContent` requires a `DialogTitle` for the component to be accessible for screen reader users.\r\n\r\nIf you want to hide the `DialogTitle`, you can wrap it with our VisuallyHidden component.\r\n\r\nFor more information, see https://radix-ui.com/primitives/docs/components/dialog [index-BMduUo-e.js:3231:165](http://127.0.0.1:7860/assets/index-BMduUo-e.js)\r\n TitleWarning http://127.0.0.1:7860/assets/index-BMduUo-e.js:3231\r\n Qj http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Hk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Wk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Pk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Ek http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n jg http://127.0.0.1:7860/assets/index-BMduUo-e.js:982\r\n Wk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Pk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Gk http://127.0.0.1:7860/assets/index-BMduUo-e.js:984\r\n Ct http://127.0.0.1:7860/assets/index-BMduUo-e.js:969\r\n xt http://127.0.0.1:7860/assets/index-BMduUo-e.js:969\r\n```\r\n\r\nI am also getting error when adding again earlier deleted variable:\r\n\"Sorry, we found an unexpected error!\r\nPlease report errors with detailed tracebacks on the [GitHub Issues](https://github.com/langflow-ai/langflow/issues) page.\r\nThank you!\"\r\n\r\nSo as asked, I am kindly reporting it.\r\n\r\nAlso, There is no feature to edit fields.\r\n\r\n\r\n### Reproduction\r\n\r\nJS error:\r\n1. Just enter the page\r\n\r\nSaving Error:\r\n1. saving new variable\r\n2. deleting this new variable\r\n3. 
adding it again with same name\r\n\r\n\r\n### Expected behavior\r\n\r\nIt should work without error\r\n\r\n### Who can help?\r\n\r\n_No response_\r\n\r\n### Operating System\r\n\r\nWindows 11 pro\r\n\r\n### Langflow Version\r\n\r\n1.13\r\n\r\n### Python Version\r\n\r\nNone\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Flow File\r\n\r\n_No response_", "code": null, "pr_html_url": "https://github.com/langflow-ai/langflow/pull/3284", "commit_html_url": null, "file_loc": "{'base_commit': '9d8009f2f5c5e3fd3bf47760debc787deb454b1a', 'files': [{'path': 'src/backend/base/langflow/api/v1/variable.py', 'status': 'modified', 'Loc': {\"(None, 'create_variable', 17)\": {'add': [22], 'mod': [26, 27, 28, 29, 30, 31, 32, 33, 34, 36, 37, 39, 40, 42, 44, 46, 47, 48, 49, 50, 51, 52]}, \"(None, 'read_variables', 60)\": {'add': [63], 'mod': [67, 68]}, \"(None, 'update_variable', 74)\": {'add': [79], 'mod': [83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95]}, \"(None, 'delete_variable', 101)\": {'add': [105], 'mod': [109, 110, 111, 112, 113, 114, 115]}, '(None, None, None)': {'mod': [1, 5, 7, 10, 11]}}}, {'path': 'src/backend/base/langflow/services/variable/base.py', 'status': 'modified', 'Loc': {\"('VariableService', None, 11)\": {'add': [84], 'mod': [72]}}}, {'path': 'src/backend/base/langflow/services/variable/kubernetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [12, 13]}, \"('KubernetesSecretService', None, 16)\": {'add': [123], 'mod': [113, 114, 115, 116, 117, 118]}}}, {'path': 'src/backend/base/langflow/services/variable/service.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [11]}, \"('DatabaseVariableService', 'get_variable', 68)\": {'add': [78], 'mod': [86, 87]}, \"('DatabaseVariableService', None, 22)\": {'add': [90, 111]}, \"('DatabaseVariableService', 'list_variables', 91)\": {'mod': [92, 93]}, \"('DatabaseVariableService', 'delete_variable', 112)\": {'mod': [118, 123]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["src/backend/base/langflow/api/v1/variable.py", "src/backend/base/langflow/services/variable/base.py", "src/backend/base/langflow/services/variable/service.py", "src/backend/base/langflow/services/variable/kubernetes.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "1e98d349877305a8ee9c84901282b5731675578f", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/803", "iss_label": "", "title": "debug mode not debug", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nattribute in chat.py is not named correctly\n\n### Current behavior \ud83d\ude2f\n\nwrong name can't call the attrb on the object\n\n### Expected behavior \ud83e\udd14\n\nto use the attrb in a call without error\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\n# Paste your prompt here\r\n", "code": null, "pr_html_url": "https://github.com/Significant-Gravitas/AutoGPT/pull/888", "commit_html_url": null, "file_loc": "{'base_commit': '1e98d349877305a8ee9c84901282b5731675578f', 'files': [{'path': 'scripts/chat.py', 'status': 'modified', 'Loc': {\"(None, 'chat_with_ai', 42)\": {'mod': [67, 74, 113]}}}, {'path': 'scripts/json_parser.py', 'status': 'modified', 'Loc': {\"(None, 'fix_json', 76)\": {'mod': [94]}}}, {'path': 'scripts/main.py', 'status': 
'modified', 'Loc': {\"(None, 'parse_arguments', 266)\": {'add': [268]}, '(None, None, None)': {'add': [294]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/json_parser.py", "scripts/main.py", "scripts/chat.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", "iss_html_url": "https://github.com/fastapi/fastapi/issues/894", "iss_label": "bug\nanswered\nreviewed", "title": "RecursionError from response model in 0.47.1", "body": "### Describe the bug\r\n\r\nFastAPI 0.47.1 will not be able to start due to a `RecursionError` when there is a circular reference among models. The issue seems to originate from https://github.com/tiangolo/fastapi/pull/889. This works fine in 0.46.0.\r\n\r\n### Environment\r\n\r\n- OS: Windows\r\n- FastAPI Version: 0.47.1\r\n- Python version: 3.7.0\r\n\r\n### To Reproduce\r\n\r\n```Python\r\nfrom typing import Optional\r\n\r\nfrom fastapi import FastAPI\r\nfrom pydantic import BaseModel, Field\r\n\r\n\r\nclass Group(BaseModel):\r\n representative: Optional['Person'] = Field(None)\r\n\r\n\r\nclass Person(BaseModel):\r\n group: Optional[Group] = Field(None)\r\n\r\n\r\nGroup.update_forward_refs()\r\n\r\n\r\napp = FastAPI()\r\n\r\n\r\n@app.get('/group/{group_id}', response_model=Group)\r\ndef get_group(group_id):\r\n return []\r\n```\r\n\r\n### Expected behavior\r\n\r\nNo exception\r\n\r\n\r\n### Actual output\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"test.py\", line 21, in \r\n @app.get('/group/{group_id}', response_model=Group)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 494, in decorator\r\n callbacks=callbacks,\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 438, in add_api_route\r\n callbacks=callbacks,\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\routing.py\", line 275, in __init__\r\n ] = create_cloned_field(self.response_field)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 100, in create_cloned_field\r\n use_type.__fields__[f.name] = create_cloned_field(f)\r\n [Previous line repeated 981 more times]\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\fastapi\\utils.py\", line 97, in create_cloned_field\r\n original_type.__name__, __config__=original_type.__config__\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\pydantic\\main.py\", line 773, in create_model\r\n return type(model_name, (__base__,), namespace)\r\n File \"D:\\virtualenvs\\test\\lib\\site-packages\\pydantic\\main.py\", line 152, in __new__\r\n if issubclass(base, BaseModel) and base != BaseModel:\r\n File \"D:\\virtualenvs\\test\\lib\\abc.py\", line 143, in __subclasscheck__\r\n return _abc_subclasscheck(cls, subclass)\r\nRecursionError: maximum recursion depth exceeded in comparison\r\n```", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/fastapi/fastapi/commit/0f152b4e97a102c0105f26d76d6e1bba3b12fc2a", "file_loc": "{'base_commit': 
'0f152b4e97a102c0105f26d76d6e1bba3b12fc2a', 'files': [{'path': 'fastapi/utils.py', 'status': 'modified', 'Loc': {\"(None, 'create_cloned_field', 134)\": {'mod': [134, 141, 142, 143, 160, 163]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/utils.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "05676caf70db7f3715cf6a3b4680f15efd45977a", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6202", "iss_label": "bug\nstale", "title": "Llama-cpp-python 0.2.81 'already loaded' fails to load models", "body": "### Describe the bug\r\n\r\nAttempting to load a model after running the update-wizard-macos today (the version from a day or two ago worked fine) fails with the stack trace log included below. \r\n\r\nNotably, the error message references [this new issue in llama-cpp-python](https://github.com/abetlen/llama-cpp-python/issues/1575).\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n- Run the update wizard to update software.\r\n- Attempt to load a gguf model using the GPU and llama.cpp\r\n- Observe that loading fails.\r\n\r\n### Screenshot\r\n\r\n![Screenshot 2024-07-04 at 11 10 47\u202fPM](https://github.com/oobabooga/text-generation-webui/assets/9359101/72148b05-8a43-4d2e-9fd5-7ba6fa57b317)\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\nTraceback (most recent call last):\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/ui_model_menu.py\", line 246, in load_model_wrapper\r\n shared.model, shared.tokenizer = load_model(selected_model, loader)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/models.py\", line 94, in load_model\r\n output = load_func_map[loader](model_name)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/models.py\", line 275, in llamacpp_loader\r\n model, tokenizer = LlamaCppModel.from_pretrained(model_file)\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/llamacpp_model.py\", line 39, in from_pretrained\r\n LlamaCache = llama_cpp_lib().LlamaCache\r\n File \"/Users/patrickleiser/Documents/Programming/AI/text-generation-webui/modules/llama_cpp_python_hijack.py\", line 38, in llama_cpp_lib\r\n raise Exception(f\"Cannot import 'llama_cpp_cuda' because '{imported_module}' is already imported. See issue #1575 in llama-cpp-python. Please restart the server before attempting to use a different version of llama-cpp-python.\")\r\nException: Cannot import 'llama_cpp_cuda' because 'llama_cpp' is already imported. See issue #1575 in llama-cpp-python. Please restart the server before attempting to use a different version of llama-cpp-python.\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nM1 Max Macbook Pro, MacOS 14.5\r\n```\r\n\r\nEdit: Just realized that Ooobabooga was the one that created that issue on the llama-cpp-python project, so I guess this error was already known. 
Sorry if this issue is therefore somewhat redundant\r\n\r\n", "code": null, "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/6227", "commit_html_url": null, "file_loc": "{'base_commit': '05676caf70db7f3715cf6a3b4680f15efd45977a', 'files': [{'path': 'modules/llama_cpp_python_hijack.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, \"(None, 'llama_cpp_lib', 13)\": {'mod': [16, 17, 18, 19, 20, 21, 22, 24, 26, 28, 29, 30, 31, 32, 33, 34, 35, 37, 38, 39, 40, 41, 42, 43, 44, 46, 47, 48, 49, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 64, 65, 67]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/llama_cpp_python_hijack.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "3c076c3c8096fa83440d701ba4d7d49606aaf61f", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2958", "iss_label": "bug", "title": "Latest version of Pillow breaks current implementation in html_generator.py.", "body": "### Describe the bug\n\nPillow 10.0.0 removed `ANTIALIAS` from `PIL.Image`. Current implementation requires 9.5.0, however the requirements.txt currently allows for 10.0.0 to be installed.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAdd new characters with png images and load the webui in chat mode.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nTraceback (most recent call last):\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\routes.py\", line 427, in run_predict\r\n output = await app.get_blocks().process_api(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\blocks.py\", line 1323, in process_api\r\n result = await self.call_function(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\gradio\\blocks.py\", line 1051, in call_function\r\n prediction = await anyio.to_thread.run_sync(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\to_thread.py\", line 33, in run_sync\r\n return await get_asynclib().run_sync_in_worker_thread(\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 877, in run_sync_in_worker_thread\r\n return await future\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\installer_files\\env\\lib\\site-packages\\anyio\\_backends\\_asyncio.py\", line 807, in run\r\n result = context.run(func, *args)\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\extensions\\gallery\\script.py\", line 71, in generate_html\r\n image_html = f''\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\modules\\html_generator.py\", line 144, in get_image_cache\r\n img = make_thumbnail(Image.open(path))\r\n File \"G:\\F\\Projects\\AI\\text-generation-webui\\one-click-installers-test\\text-generation-webui\\modules\\html_generator.py\", line 132, in make_thumbnail\r\n image = 
ImageOps.fit(image, (350, 470), Image.ANTIALIAS)\r\nAttributeError: module 'PIL.Image' has no attribute 'ANTIALIAS'\n```\n\n\n### System Info\n\n```shell\nWindows 10\r\nGPU: GTX 1080ti\n```\n", "code": null, "pr_html_url": "https://github.com/oobabooga/text-generation-webui/pull/2954", "commit_html_url": null, "file_loc": "{'base_commit': '3c076c3c8096fa83440d701ba4d7d49606aaf61f', 'files': [{'path': 'modules/html_generator.py', 'status': 'modified', 'Loc': {\"(None, 'make_thumbnail', 129)\": {'mod': [132]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Other\ndependency declaration"}, "loctype": {"code": ["modules/html_generator.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "cf2c4e740b1d06e145c1992515d9b34e18affc95", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/801", "iss_label": "enhancement", "title": "How can we disable Gradio analytics?", "body": "**Description**\r\n\r\nHow where / can this be implemented?\r\n\r\nhttps://github.com/brkirch/stable-diffusion-webui/commit/a534959cbcabc95af50fbbe4654f8c0ee1cdd41c\r\n\r\n`os.environ['GRADIO_ANALYTICS_ENABLED'] = 'False'`\r\n\r\n**Additional Context**\r\n\r\nFor [stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui)\r\n\r\n[preserve privacy by disabling gradio analytics globally\r\n#8658 ](https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/8658)\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/cf2c4e740b1d06e145c1992515d9b34e18affc95", "file_loc": "{'base_commit': 'cf2c4e740b1d06e145c1992515d9b34e18affc95', 'files': [{'path': 'server.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["server.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "a3085dba073fe8bdcfb5120729a84560f5d024c3", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1000", "iss_label": "bug", "title": "Bot spams random numbers or does not load", "body": "### Describe the bug\n\nHello,\r\nI installed oobabooga with the one click installer and I can not load the facebook_opt-2.7b (I copied the console into the log).\r\nI also installed the gpt4x alpaca model with the automatic installer(download-model.bat). If I chat with it, it just spams random 2 and 4 (I took a screenshot and pasted it down below). If I manually install the gpt4x model (with the help of this tutorial: https://youtu.be/nVC9D9fRyNU?t=162 ), I get the same output as the Facebook model in the log. \n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\n1. Automatic installer\r\n2. let download-model.bat download anon8231489123/gpt4-x-alpaca-13b-native-4bit-128g or follow this tutorial:\r\n3. let download-model.bat download one model from the list\r\n4. 
start-webui.bat has following arguments: python server.py --auto-devices --chat --wbits 4 --groupsize 128\n\n### Screenshot\n\n![image](https://user-images.githubusercontent.com/125409728/230895729-d2f12173-81a5-4de6-9296-71845906ab01.png)\r\n\n\n### Logs\n\n```shell\nStarting the web UI...\r\n\r\n===================================BUG REPORT===================================\r\nWelcome to bitsandbytes. For bug reports, please submit your error trace to: https://github.com/TimDettmers/bitsandbytes/issues\r\n================================================================================\r\nCUDA SETUP: CUDA runtime path found: C:\\Users\\Caner\\Documents\\oobabooga-windows\\installer_files\\env\\bin\\cudart64_110.dll\r\nCUDA SETUP: Highest compute capability among GPUs detected: 8.6\r\nCUDA SETUP: Detected CUDA version 117\r\nCUDA SETUP: Loading binary C:\\Users\\Caner\\Documents\\oobabooga-windows\\installer_files\\env\\lib\\site-packages\\bitsandbytes\\libbitsandbytes_cuda117.dll...\r\nThe following models are available:\r\n\r\n1. anon8231489123_gpt4-x-alpaca-13b-native-4bit-128g\r\n2. facebook_opt-2.7b\r\n\r\nWhich one do you want to load? 1-2\r\n\r\n2\r\n\r\nLoading facebook_opt-2.7b...\r\nCould not find the quantized model in .pt or .safetensors format, exiting...\r\nDr\u00fccken Sie eine beliebige Taste . . .\n```\n\n\n### System Info\n\n```shell\nWindows 10 Version 22H2, Amd Ryzen 5800x, Palit Gamingpro Rtx 3080.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/a3085dba073fe8bdcfb5120729a84560f5d024c3", "file_loc": "{'base_commit': 'a3085dba073fe8bdcfb5120729a84560f5d024c3', 'files': [{'path': 'modules/models.py', 'status': 'modified', 'Loc': {\"(None, 'load_model', 40)\": {'add': [176]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/models.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/1203", "iss_label": "bug", "title": "When I click \"Click Me\" on (Character screen) it no longer generates a log of the current chat/instruction. ", "body": "### Describe the bug\n\nWhen I click \"Click Me\" on (Character screen) it no longer generates a log of the current chat/instruction. \r\n\r\nThis either got caused when I did an update to the whole program. Reinstall does not fix. I also ran some Antivirus and registry changes. I can generate new note pads and I reinstalled notepad on my PC. \r\n\r\nI've of course restarted my pc and I've tried firefox and opera as the host browser. This is a new problem just from today. But I did a few things on my PC. \r\n\r\nNote pad is also missing from my right click \"Create new\" list. However it is is in my folders create new list. The one you can click on in the top menu settings inside explorer. The first thing is I guess I need to rule out if others having the issue or not. Next it would be nice to get some utility to auto save all logs or something as an option. \r\n\r\nThanks for any advice anyone. If it's a bug I will simply wait for a fix. It's possible I may have messed something but but it may also be a design flaw of the program if it's dependant on just one specific thing to save the chat log. 
\r\n\r\nI have no logs or other info to share at this time. \n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nFor me it's easy. Issue persists even if I restart PC, update or reinstall oobabooga. It worked last night. Today it does not work. \n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nNot applicable.\n```\n\n\n### System Info\n\n```shell\n12700K, Nvidia 4080, Windows 10. Running locally on my PC not a colab etc. Like I said I tried firefox and opera. Issue seems persistent.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/c4aa1a42b156b9c5ddcfb060cc497b2fba55430f", "file_loc": "{'base_commit': 'c4aa1a42b156b9c5ddcfb060cc497b2fba55430f', 'files': [{'path': 'server.py', 'status': 'modified', 'Loc': {\"(None, 'create_model_menus', 251)\": {'mod': [324, 349]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["server.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2326", "iss_label": "bug", "title": "extensions/openai KeyError: 'assistant'", "body": "### Describe the bug\r\n\r\nStarting after [https://github.com/oobabooga/text-generation-webui/pull/2291]\r\n\r\nWhich I think it's a great improvement.\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\nStart server with extension --openai --model openaccess-ai-collective_manticore-13b.\r\nStarting [DGdev91 Auto-GPT](https://github.com/DGdev91/Auto-GPT), runs 1 cycle, give 'y' for the second, the error appears.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nopenaccess-ai-collectiveException occurred during processing of request from ('127.0.0.1', 42032)\r\nTraceback (most recent call last):\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 683, in process_request_thread\r\n self.finish_request(request, client_address)\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 360, in finish_request\r\n self.RequestHandlerClass(request, client_address, self)\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/socketserver.py\", line 747, in __init__\r\n self.handle()\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/http/server.py\", line 433, in handle\r\n self.handle_one_request()\r\n File \"/home/mihai/miniconda3/envs/textgen/lib/python3.10/http/server.py\", line 421, in handle_one_request\r\n method()\r\n File \"/home/mihai/text-generation-webui/extensions/openai/script.py\", line 310, in do_POST\r\n msg = role_formats[role].format(message=content)\r\nKeyError: 'assistant'\r\n----------------------------------------\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\nWin11 WSL2 Ubuntu 20.04\r\nPython 3.10\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/oobabooga/text-generation-webui/commit/2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df", "file_loc": "{'base_commit': '2b7ba9586fb80cfbc47c77ad7bbbb03f7d6bc0df', 'files': [{'path': 'extensions/openai/script.py', 'status': 'modified', 'Loc': {\"('Handler', 'do_POST', 
159)\": {'mod': [262]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["extensions/openai/script.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "92a0994f01ec6ae7756951312a70e101fb33c7e5", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/597", "iss_label": "", "title": "The app starts transparent!", "body": "Seems like everything was ok with the install.\r\n\r\nWhen I run I get the error: / warning:\r\n\r\n```\r\n> python run.py --execution-provider cuda\r\nException in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 839, in callit\r\n func(*args)\r\n File \"C:\\Users\\dulci\\Documents\\Development\\Deep-Live-Cam\\.venv\\lib\\site-packages\\customtkinter\\windows\\widgets\\scaling\\scaling_tracker.py\", line \r\n186, in check_dpi_scaling\r\n window.block_update_dimensions_event()\r\n File \"C:\\Users\\dulci\\.pyenv\\pyenv-win\\versions\\3.10.6\\lib\\tkinter\\__init__.py\", line 2383, in __getattr__\r\n return getattr(self.tk, attr)\r\nAttributeError: '_tkinter.tkapp' object has no attribute 'block_update_dimensions_event'\r\n(.venv) PS C:\\Users\\dulci\\Documents\\Development\\Deep-Live-Cam>\r\n``` \r\n\r\nAnd the application starts so transparent (opacity super low) that I can barely see it and I can't use it, because it is almost invisible against the desktop background.\r\n\r\nCan anybody suggest how to fix it?\r\n\r\nI am working with venv python 3.10.6.\r\n", "code": null, "pr_html_url": "https://github.com/hacksider/Deep-Live-Cam/pull/632", "commit_html_url": null, "file_loc": "{'base_commit': '92a0994f01ec6ae7756951312a70e101fb33c7e5', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 2)': {'add': [2]}, '(None, None, 3)': {'add': [3]}, '(None, None, 37)': {'add': [37]}, '(None, None, 47)': {'add': [47]}, '(None, None, 68)': {'add': [68]}, '(None, None, 168)': {'add': [168]}, '(None, None, 365)': {'add': [365]}, '(None, None, 6)': {'mod': [6]}, '(None, None, 8)': {'mod': [8]}, '(None, None, 10)': {'mod': [10]}, '(None, None, 12)': {'mod': [12]}, '(None, None, 15)': {'mod': [15]}, '(None, None, 20)': {'mod': [20]}, '(None, None, 24)': {'mod': [24]}, '(None, None, 28)': {'mod': [28]}, '(None, None, 32)': {'mod': [32]}, '(None, None, 36)': {'mod': [36]}, '(None, None, 39)': {'mod': [39, 40, 41]}, '(None, None, 43)': {'mod': [43, 44, 46]}, '(None, None, 49)': {'mod': [49, 50, 51, 52, 53, 54, 55, 56, 57]}, '(None, None, 59)': {'mod': [59]}, '(None, None, 61)': {'mod': [61, 62]}, '(None, None, 64)': {'mod': [64]}, '(None, None, 66)': {'mod': [66, 67]}, '(None, None, 71)': {'mod': [71, 72]}, '(None, None, 75)': {'mod': [75]}, '(None, None, 77)': {'mod': [77]}, '(None, None, 82)': {'mod': [82]}, '(None, None, 84)': {'mod': [84, 85, 86]}, '(None, None, 91)': {'mod': [91, 92]}, '(None, None, 96)': {'mod': [96]}, '(None, None, 98)': {'mod': [98, 100]}, '(None, None, 105)': {'mod': [105, 106]}, '(None, None, 110)': {'mod': [110]}, '(None, None, 112)': {'mod': [112, 113]}, '(None, None, 118)': {'mod': [118, 119]}, '(None, None, 123)': {'mod': [123]}, '(None, None, 125)': 
{'mod': [125, 126]}, '(None, None, 131)': {'mod': [131, 132]}, '(None, None, 136)': {'mod': [136]}, '(None, None, 138)': {'mod': [138, 139]}, '(None, None, 144)': {'mod': [144, 145]}, '(None, None, 150)': {'mod': [150, 151]}, '(None, None, 153)': {'mod': [153, 154]}, '(None, None, 156)': {'mod': [156]}, '(None, None, 158)': {'mod': [158, 159, 160, 161, 162]}, '(None, None, 164)': {'mod': [164]}, '(None, None, 166)': {'mod': [166, 167]}, '(None, None, 170)': {'mod': [170]}, '(None, None, 197)': {'mod': [197]}, '(None, None, 206)': {'mod': [206]}, '(None, None, 210)': {'mod': [210]}, '(None, None, 224)': {'mod': [224]}, '(None, None, 237)': {'mod': [237]}, '(None, None, 247)': {'mod': [247]}, '(None, None, 274)': {'mod': [274]}, '(None, None, 306)': {'mod': [306]}, '(None, None, 314)': {'mod': [314]}, '(None, None, 343)': {'mod': [343, 344]}, '(None, None, 346)': {'mod': [346, 347]}, '(None, None, 353)': {'mod': [353]}, '(None, None, 360)': {'mod': [360]}}}, {'path': 'modules/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 548], 'mod': [10, 13, 18, 19, 22, 23, 24, 25, 27, 28, 29, 30, 32, 33, 34, 35, 37, 38]}, \"(None, 'analyze_target', 154)\": {'add': [167], 'mod': [155, 163, 166]}, \"(None, 'create_source_target_popup', 176)\": {'add': [182], 'mod': [191, 192, 200, 201, 203, 204, 206, 207, 210, 211, 214, 215, 217, 218, 221, 222, 224]}, \"(None, 'update_popup_source', 221)\": {'add': [233], 'mod': [228, 229, 238, 240, 241, 242, 243, 245, 246, 249, 250]}, \"(None, 'select_source_path', 290)\": {'add': [299, 302], 'mod': [294]}, \"(None, 'swap_faces_paths', 305)\": {'add': [323, 326]}, \"(None, 'select_target_path', 329)\": {'add': [338, 343, 346], 'mod': [333]}, \"(None, 'update_preview', 435)\": {'add': [445], 'mod': [437, 438, 441, 443, 444, 450]}, \"(None, 'init', 61)\": {'mod': [61]}, \"(None, 'create_root', 70)\": {'mod': [70, 73, 74, 75, 77, 78, 79, 80, 81, 83, 84, 86, 87, 89, 90, 92, 93, 95, 96, 99, 100, 103, 104, 106, 107, 108, 109, 112, 113, 116, 117, 119, 121, 122, 124, 125, 126, 129, 130, 132, 133, 135, 136, 138, 139, 141, 142, 144, 145, 147, 148, 149, 150]}, \"(None, 'on_submit_click', 184)\": {'mod': [189]}, \"(None, 'on_button_click', 194)\": {'mod': [195, 197, 198]}, \"(None, 'create_preview', 258)\": {'mod': [263, 265, 268, 269, 271]}, \"(None, 'select_output_path', 349)\": {'mod': [353, 355]}, \"(None, 'check_and_ignore_nsfw', 364)\": {'mod': [365, 367, 370, 372, 375, 376, 378]}, \"(None, 'fit_image_to_size', 381)\": {'mod': [390]}, \"(None, 'toggle_preview', 417)\": {'mod': [418]}, \"(None, 'init_preview', 425)\": {'mod': [431]}, \"(None, 'create_webcam_preview', 463)\": {'mod': [466, 467, 468, 469, 471, 484, 487, 512]}, \"(None, 'on_submit_click', 527)\": {'mod': [533]}, \"(None, 'create_source_target_popup_for_webcam', 519)\": {'mod': [540, 543, 546]}, \"(None, 'refresh_data', 550)\": {'mod': [553, 554, 563, 565, 566, 568, 569, 571, 572, 575, 576, 579, 580, 581, 584, 585, 588, 589, 590]}, \"(None, 'update_webcam_source', 593)\": {'mod': [596, 600, 601, 610, 612, 613, 614, 615, 617, 618, 621, 622, 623, 624]}, \"(None, 'update_webcam_target', 629)\": {'mod': [629, 632, 636, 637, 646, 648, 649, 650, 651, 653, 654, 657, 658, 659, 660]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "Textualize", "repo_name": 
"rich", "base_commit": "4be4f6bd24d2a35da0e50df943209ad24c068159", "iss_html_url": "https://github.com/Textualize/rich/issues/388", "iss_label": "enhancement\ndone", "title": "[REQUEST] minimal table width", "body": "hey @willmcgugan, great package!\r\n\r\nin `Table.add_column` method it would be nice to have an _actual_ minimal with option. [The documentation](https://github.com/willmcgugan/rich/blob/c98bf070e4f3785dbb050b72c09663021c5b1d73/rich/table.py#L303) says that `width` argument sets the minimal with for a column, but in fact in my tests it sets a constant width for one. I'd like my column to be at least `width` wide but expand if there is a longer string to display.", "code": null, "pr_html_url": "https://github.com/Textualize/rich/pull/391", "commit_html_url": null, "file_loc": "{'base_commit': '4be4f6bd24d2a35da0e50df943209ad24c068159', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, 26)': {'add': [26]}}}, {'path': 'rich/console.py', 'status': 'modified', 'Loc': {\"('Console', 'rule', 991)\": {'mod': [993]}}}, {'path': 'rich/measure.py', 'status': 'modified', 'Loc': {\"('Measurement', None, 11)\": {'add': [45]}, \"('Measurement', 'with_maximum', 34)\": {'mod': [41]}}}, {'path': 'rich/table.py', 'status': 'modified', 'Loc': {\"('Column', None, 29)\": {'add': [50, 54]}, \"('Table', '__init__', 118)\": {'add': [123, 157]}, \"('Table', 'add_column', 278)\": {'add': [288, 303, 321]}, \"('Table', '_calculate_column_widths', 410)\": {'add': [417], 'mod': [458, 459]}, \"('Table', '_measure_column', 558)\": {'add': [587]}, \"('Table', '__rich_measure__', 241)\": {'mod': [249, 254, 255, 265]}, '(None, None, None)': {'mod': [740, 741, 743, 745, 746, 747, 748, 749, 750, 751, 753, 754, 755, 756, 757, 758, 759, 761]}}}, {'path': 'tests/test_measure.py', 'status': 'modified', 'Loc': {\"(None, 'test_measure_renderables', 29)\": {'add': [33]}}}, {'path': 'tests/test_table.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 125]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/measure.py", "rich/console.py", "rich/table.py"], "doc": ["CHANGELOG.md"], "test": ["tests/test_table.py", "tests/test_measure.py"], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "9fa57892790ce205634f6a7c83de2b9e52ab5284", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/8799", "iss_label": "site-support-request\naccount-needed", "title": "Request support site: Viceland", "body": "Viceland is a new channel from Vice. Website at https://www.viceland.com. 
Appears to use Uplynk, and may be encrypted, so it may not be possible.\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/9fa57892790ce205634f6a7c83de2b9e52ab5284", "file_loc": "{'base_commit': '9fa57892790ce205634f6a7c83de2b9e52ab5284', 'files': [{'path': 'youtube_dl/extractor/uplynk.py', 'status': 'modified', 'Loc': {\"('UplynkIE', None, 13)\": {'add': [51], 'mod': [29, 30]}, \"('UplynkIE', '_real_extract', 52)\": {'mod': [53]}, \"('UplynkPreplayIE', '_real_extract', 59)\": {'mod': [64]}}}, {'path': 'youtube_dl/extractor/viceland.py', 'status': 'modified', 'Loc': {\"('VicelandIE', None, 20)\": {'add': [27]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/uplynk.py", "youtube_dl/extractor/viceland.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "5dbe81a1d35ae704b5ea208698a6bb785923d71a", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/8171", "iss_label": "", "title": "Vimeo ondemand download preiview only ", "body": "I try to download a video from vimeo on demand but i only get the preview of. \ncould someone help me plaese \n\n youtube-dl -u .com -p https://vimeo.com/ondemand/thelastcolony/150274832 --verbo\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-u', u'PRIVATE', u'-p', u'PRIVATE', u'https://vimeo.com/ondemand/thelastcolony/150274832', u'--verbo']\n[debug] Encodings: locale cp1252, fs mbcs, out cp437, pref cp1252\n[debug] youtube-dl version 2016.01.01\n[debug] Python version 2.7.10 - Windows-8-6.2.9200\n[debug] exe versions: none\n[debug] Proxy map: {}\n[vimeo] Logging in\n[vimeo] 150274832: Downloading webpage\n[vimeo] 150274832: Extracting information\n[vimeo] 150274832: Downloading webpage\n[vimeo] 150274832: Downloading JSON metadata\n[vimeo] 150274832: Downloading m3u8 information\n[debug] Invoking downloader on u'https://01-lvl3-gcs-pdl.vimeocdn.com/vimeo-prod-skyfire-std-us/01/54/6/150274832/459356950.mp4?expires=1452223995&token=00c3c5830ebe84f9310d4'\n[download] The Last Colony-150274832.mp4 has already been downloaded\n[download] 100% of 119.97MiB\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/5dbe81a1d35ae704b5ea208698a6bb785923d71a", "file_loc": "{'base_commit': '5dbe81a1d35ae704b5ea208698a6bb785923d71a', 'files': [{'path': 'youtube_dl/extractor/vimeo.py', 'status': 'modified', 'Loc': {\"('VimeoIE', '_real_extract', 264)\": {'add': [356], 'mod': [265, 267, 269, 345]}, \"('VimeoIE', '_extract_vimeo_url', 214)\": {'mod': [220]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/vimeo.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "365343131d752bece96d2279a3e0bcd7e9f0000f", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/17728", "iss_label": "", "title": "[PluralSight] Unable to download captions JSON: HTTP Error 404: Not Found", "body": "last version I tested\r\nyoutube-dl is up-to-date (2018.09.26)\r\n\r\nI'm try to download video from PluralSight. Video is ok but subtitle is cannot download. 
The error is \r\nWARNING: Unable to download captions JSON: HTTP Error 404: Not Found\r\nMy command: \r\n\r\nyoutube-dl --username xxx --password xxxx --sleep-interval 35 --max-sleep-interval 120 --sub-lang en --sub-format srt --write-sub https://app.pluralsight.com/library/courses/design-database-structure-sql-server-2014-70-465/table-of-contents\r\n\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/365343131d752bece96d2279a3e0bcd7e9f0000f", "file_loc": "{'base_commit': '365343131d752bece96d2279a3e0bcd7e9f0000f', 'files': [{'path': 'youtube_dl/extractor/pluralsight.py', 'status': 'modified', 'Loc': {\"('PluralsightIE', None, 112)\": {'mod': [213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224]}, \"('PluralsightIE', '_real_extract', 271)\": {'mod': [416]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/pluralsight.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "50f84a9ae171233c08ada41e03f6555c5ed95236", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/7427", "iss_label": "", "title": "\"ERROR: Signature extraction failed\" for youtube video", "body": "Hi,\n\nI have encountered an error with a video:\nhttps://www.youtube.com/watch?v=LDvVYqUMuJ0\n\n```\n$ /tmp/ydl/youtube-dl/youtube-dl https://www.youtube.com/watch?v=LDvVYqUMuJ0 --verbose\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'https://www.youtube.com/watch?v=LDvVYqUMuJ0', u'--verbose']\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\n[debug] youtube-dl version 2015.11.02\n[debug] Python version 2.7.10+ - Linux-4.2.0-1-amd64-x86_64-with-debian-stretch-sid\n[debug] exe versions: ffmpeg 1.0.6, ffprobe 1.0.6, rtmpdump 2.4\n[debug] Proxy map: {}\n[youtube] LDvVYqUMuJ0: Downloading webpage\n[youtube] LDvVYqUMuJ0: Downloading video info webpage\n[youtube] LDvVYqUMuJ0: Extracting video information\nWARNING: unable to extract html5 player; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n[youtube] {22} signature length 40.44, html5 player None\nERROR: Signature extraction failed: Traceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n (caused by ExtractorError(u\"Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\",)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\nTraceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/YoutubeDL.py\", line 661, in extract_info\n ie_result = ie.extract(url)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/common.py\", line 290, in extract\n return self._real_extract(url)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 1345, in _real_extract\n encrypted_sig, video_id, player_url, age_gate)\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 827, in _decrypt_signature\n 'Signature extraction failed: ' + tb, cause=e)\nExtractorError: Signature extraction failed: Traceback (most recent call last):\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 817, in _decrypt_signature\n video_id, player_url, s\n File \"/tmp/ydl/youtube-dl/youtube-dl/youtube_dl/extractor/youtube.py\", line 709, in _extract_signature_function\n raise ExtractorError('Cannot identify player %r' % player_url)\nExtractorError: Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\n (caused by ExtractorError(u\"Cannot identify player u'https://s.ytimg.com/yts/jsbin/player-en_US-vfljDEtYP/base.js'; please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. Be sure to call youtube-dl with the --verbose flag and include its complete output.\",)); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; type youtube-dl -U to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\n```\n\nThanks,\nCorey\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/50f84a9ae171233c08ada41e03f6555c5ed95236", "file_loc": "{'base_commit': '50f84a9ae171233c08ada41e03f6555c5ed95236', 'files': [{'path': 'youtube_dl/extractor/youtube.py', 'status': 'modified', 'Loc': {\"('YoutubeIE', '_extract_signature_function', 704)\": {'mod': [706]}, \"('YoutubeIE', '_real_extract', 1008)\": {'mod': [1346]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16139", "iss_label": "fixed", "title": "ITV BTCC videos support?", "body": "## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.04.09*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [*] I've **verified** and **I assure** that I'm running youtube-dl **2018.04.09**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [*] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [*] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [*] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [*] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [ ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v `), copy the **whole** output and insert it here. 
It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n$ youtube-dl -v https://pastebin.com/raw/KxD6rhpF --geo-bypass-country UK\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Custom config: []\r\n[debug] Command-line args: [u'-v', u'https://pastebin.com/raw/KxD6rhpF', u'--geo-bypass-country', u'UK']\r\n[debug] Encodings: locale UTF-8, fs UTF-8, out UTF-8, pref UTF-8\r\n[debug] youtube-dl version 2018.04.09\r\n[debug] Python version 2.7.13 (CPython) - Linux-4.9.62-v7+-armv7l-with-debian-9.4\r\n[debug] exe versions: ffmpeg 3.2.10-1, ffprobe 3.2.10-1\r\n[debug] Proxy map: {}\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[generic] KxD6rhpF: Requesting header\r\nWARNING: Falling back on generic information extractor.\r\n[generic] KxD6rhpF: Downloading webpage\r\n[generic] KxD6rhpF: Extracting information\r\n[download] Downloading playlist: Brightcove video tester\r\n[generic] playlist Brightcove video tester: Collected 1 video ids (downloading 1 of them)\r\n[download] Downloading video 1 of 1\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[debug] Using fake IP None (UK) as X-Forwarded-For.\r\n[brightcove:new] 5766870719001: Downloading webpage\r\n[brightcove:new] 5766870719001: Downloading JSON metadata\r\nERROR: Access to this resource is forbidden by access policy.\r\nYou might want to use a VPN or a proxy server (with --proxy) to workaround.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/brightcove.py\", line 706, in _real_extract\r\n json_data = self._download_json(api_url, video_id, headers=headers)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 692, in _download_json\r\n encoding=encoding, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 634, in _download_webpage\r\n res = self._download_webpage_handle(url_or_request, video_id, note, errnote, fatal, encoding=encoding, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/adobepass.py\", line 1332, in _download_webpage_handle\r\n *args, **compat_kwargs(kwargs))\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 539, in _download_webpage_handle\r\n urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 528, in _request_webpage\r\n raise ExtractorError(errmsg, sys.exc_info()[2], cause=err)\r\nExtractorError: Unable to download JSON metadata: HTTP Error 403: Forbidden (caused by HTTPError()); please report this issue on https://yt-dl.org/bug . Make sure you are using the latest version; see https://yt-dl.org/update on how to update. 
Be sure to call youtube-dl with the --verbose flag and include its complete output.\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/YoutubeDL.py\", line 789, in extract_info\r\n ie_result = ie.extract(url)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 440, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/brightcove.py\", line 712, in _real_extract\r\n self.raise_geo_restricted(msg=message)\r\n File \"/usr/local/lib/python2.7/dist-packages/youtube_dl/extractor/common.py\", line 743, in raise_geo_restricted\r\n raise GeoRestrictedError(msg, countries=countries)\r\nGeoRestrictedError: Access to this resource is forbidden by access policy.\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n\r\nITV separated the BTCC race videos from the hub (which also seems to be having issues as per https://github.com/rg3/youtube-dl/issues/15925)\r\nLately the video are hosted at http://www.itv.com/btcc/races (ie for a particular weekend all videos are posted at individual pages like: http://www.itv.com/btcc/races/btcc-2018-all-the-action-from-brands-hatch)\r\n\r\nSkimming the source code of this sample weekend page, I extracted the vid params and built a test page:\r\n- Single video: https://pastebin.com/raw/KxD6rhpF\r\n\r\n**Question 1:** is the log error above pointing just to a geo restriction error or is there anything else involved that I missed? (ie: like writing some header to force the ITV scrapper to act instead of a generic one)\r\n\r\n```\r\n\r\n \r\n\r\n \r\n Brightcove video tester\r\n \r\n \r\n\r\n\r\n\r\n\t\r\n\r\n\t\r\n\r\n\r\n```\r\n\r\n**Question 2:** is there any way to generate a playlist of downloadable items based on pages like http://www.itv.com/btcc/races/btcc-2018-all-the-action-from-brands-hatch with youtube-dl?", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/ytdl-org/youtube-dl/commit/ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd", "file_loc": "{'base_commit': 'ea1f5e5dbd6c58d4f0872a65b97611732f4b29bd', 'files': [{'path': 'youtube_dl/extractor/extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [480]}}}, {'path': 'youtube_dl/extractor/itv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 20]}, \"('ITVIE', '_real_extract', 56)\": {'add': [262]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/itv.py", "youtube_dl/extractor/extractors.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "78f635ad3a8f819645f3991dfd244ff09f06a7f0", "iss_html_url": "https://github.com/localstack/localstack/issues/8833", "iss_label": "type: bug\naws:cloudformation\nstatus: backlog", "title": "bug: CDK Table build with replicationRegions failing on latest", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nTrying to deploy a [CDK Table](https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_dynamodb.Table.html) to a localstack environment results in the following:\r\n\r\n```\r\nstack | 0/3 | 
8:56:58 PM | CREATE_FAILED | AWS::CloudFormation::Stack | stack Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once\r\n```\r\n\r\n```\r\nlocalstack | 2023-08-06T03:56:28.709 DEBUG --- [ asgi_gw_4] localstack.packages.api : Installation of dynamodb-local skipped (already installed).\r\nlocalstack | 2023-08-06T03:56:28.709 DEBUG --- [ asgi_gw_4] l.services.dynamodb.server : Starting DynamoDB Local: ['java', '-Xmx256m', '-javaagent:/usr/lib/localstack/dynamodb-local/latest/ddb-local-loader-0.1.jar', '-Djava.library.path=/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal_lib', '-jar', '/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal.jar', '-port', '35799', '-dbPath', '/var/lib/localstack/tmp/state/dynamodb']\r\nlocalstack | 2023-08-06T03:56:28.710 DEBUG --- [uncthread160] localstack.utils.run : Executing command: ['java', '-Xmx256m', '-javaagent:/usr/lib/localstack/dynamodb-local/latest/ddb-local-loader-0.1.jar', '-Djava.library.path=/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal_lib', '-jar', '/usr/lib/localstack/dynamodb-local/latest/DynamoDBLocal.jar', '-port', '35799', '-dbPath', '/var/lib/localstack/tmp/state/dynamodb']\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : Initializing DynamoDB Local with the following configuration:\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : Port:\t35799\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : InMemory:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.939 DEBUG --- [uncthread160] l.services.dynamodb.server : DbPath:\t/var/lib/localstack/tmp/state/dynamodb\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : SharedDb:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : shouldDelayTransientStatuses:\tfalse\r\nlocalstack | 2023-08-06T03:56:28.940 DEBUG --- [uncthread160] l.services.dynamodb.server : CorsParams:\tnull\r\nlocalstack | 2023-08-06T03:56:28.950 DEBUG --- [uncthread160] l.services.dynamodb.server :\r\nlocalstack | 2023-08-06T03:56:29.715 INFO --- [ asgi_gw_4] botocore.credentials : Found credentials in environment variables.\r\nlocalstack | 2023-08-06T03:56:30.532 DEBUG --- [ asgi_gw_4] l.services.plugins : checking service health dynamodb:4566\r\nlocalstack | 2023-08-06T03:56:30.534 INFO --- [ asgi_gw_4] localstack.utils.bootstrap : Execution of \"require\" took 1887.18ms\r\nlocalstack | 2023-08-06T03:56:30.879 DEBUG --- [ asgi_gw_1] l.services.plugins : checking service health kinesis:4566\r\nlocalstack | 2023-08-06T03:56:30.886 INFO --- [ asgi_gw_1] l.s.k.kinesis_mock_server : Creating kinesis backend for account 000000000000\r\nlocalstack | 2023-08-06T03:56:30.887 DEBUG --- [ asgi_gw_1] localstack.packages.api : Starting installation of kinesis-local...\r\nlocalstack | 2023-08-06T03:56:30.887 DEBUG --- [ asgi_gw_1] localstack.utils.run : Executing command: ['npm', 'install', '--prefix', '/var/lib/localstack/lib/kinesis-local/0.4.2', 'kinesis-local@0.4.2']\r\nlocalstack | 2023-08-06T03:56:32.575 DEBUG --- [ asgi_gw_1] localstack.packages.core : Setting ownership root:root on /var/lib/localstack/lib/kinesis-local/0.4.2\r\nlocalstack | 2023-08-06T03:56:32.575 DEBUG --- [ asgi_gw_1] localstack.packages.api : Installation of kinesis-local finished.\r\nlocalstack | 
2023-08-06T03:56:32.576 DEBUG --- [ asgi_gw_1] l.s.k.kinesis_mock_server : starting kinesis process ['node', PosixPath('/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/main.js')] with env vars {'KINESIS_MOCK_CERT_PATH': '/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/server.json', 'KINESIS_MOCK_PLAIN_PORT': 42209, 'KINESIS_MOCK_TLS_PORT': 34279, 'SHARD_LIMIT': '100', 'ON_DEMAND_STREAM_COUNT_LIMIT': '10', 'AWS_ACCOUNT_ID': '000000000000', 'CREATE_STREAM_DURATION': '500ms', 'DELETE_STREAM_DURATION': '500ms', 'REGISTER_STREAM_CONSUMER_DURATION': '500ms', 'START_STREAM_ENCRYPTION_DURATION': '500ms', 'STOP_STREAM_ENCRYPTION_DURATION': '500ms', 'DEREGISTER_STREAM_CONSUMER_DURATION': '500ms', 'MERGE_SHARDS_DURATION': '500ms', 'SPLIT_SHARD_DURATION': '500ms', 'UPDATE_SHARD_COUNT_DURATION': '500ms', 'UPDATE_STREAM_MODE_DURATION': '500ms', 'SHOULD_PERSIST_DATA': 'true', 'PERSIST_PATH': '../../../var/lib/localstack/tmp/state/kinesis', 'PERSIST_FILE_NAME': '000000000000.json', 'PERSIST_INTERVAL': '5s', 'LOG_LEVEL': 'INFO'}\r\nlocalstack | 2023-08-06T03:56:32.576 DEBUG --- [uncthread166] localstack.utils.run : Executing command: ['node', PosixPath('/var/lib/localstack/lib/kinesis-local/0.4.2/node_modules/kinesis-local/main.js')]\r\nlocalstack | 2023-08-06T03:56:32.834 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.823005Z contextId=6956dd23-c61e-4aa1-80ce-b8bfc8d0894b, cacheConfig={\"awsAccountId\":\"000000000000\",\"awsRegion\":\"us-east-1\",\"createStreamDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"deleteStreamDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"deregisterStreamConsumerDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"initializeStreams\":null,\"logLevel\":\"INFO\",\"mergeShardsDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"onDemandStreamCountLimit\":10,\"persistConfig\":{\"fileName\":\"000000000000.json\",\"interval\":{\"length\":5,\"unit\":\"SECONDS\"},\"loadIfExists\":true,\"path\":\"../../../var/lib/localstack/tmp/state/kinesis\",\"shouldPersist\":true},\"registerStreamConsumerDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"shardLimit\":100,\"splitShardDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"startStreamEncryptionDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"stopStreamEncryptionDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"},\"updateShardCountDuration\":{\"length\":500,\"unit\":\"MILLISECONDS\"}} Logging Cache Config\r\nlocalstack | 2023-08-06T03:56:32.986 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.986197Z Starting Kinesis TLS Mock Service on port 34279\r\nlocalstack | 2023-08-06T03:56:32.987 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.986862Z Starting Kinesis Plain Mock Service on port 42209\r\nlocalstack | 2023-08-06T03:56:32.994 INFO --- [uncthread166] l.s.k.kinesis_mock_server : [info] kinesis.mock.KinesisMockService$ 2023-08-06T03:56:32.994001Z contextId=1d81ef53-1648-4fbc-8b16-f09375d77ece Starting persist data loop\r\nlocalstack | 2023-08-06T03:56:33.215 DEBUG --- [uncthread158] l.s.c.resource_provider : Executing callback method for AWS::DynamoDB::Table:ddbFooTabletable735E488F\r\nlocalstack | 2023-08-06T03:56:33.330 DEBUG --- [uncthread158] l.s.c.e.template_deployer : Error applying changes for CloudFormation stack \"stack-ddbFooTableNestedStackdd-f1d922ad\": 
'NoneType' object has no attribute 'get' Traceback (most recent call last):\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 965, in _run\r\nlocalstack | self.do_apply_changes_in_loop(changes, stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1011, in do_apply_changes_in_loop\r\nlocalstack | should_deploy = self.prepare_should_deploy_change(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1105, in prepare_should_deploy_change\r\nlocalstack | resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 478, in _resolve_refs_recursively\r\nlocalstack | value[key] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 478, in _resolve_refs_recursively\r\nlocalstack | value[key] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | 
result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 497, in _resolve_refs_recursively\r\nlocalstack | value[i] = resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack_ext/services/cloudformation/cloudformation_extended.py.enc\", line 35, in resolve_refs_recursively\r\nlocalstack | A=resolve_refs_recursively_orig(*E,**F)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 178, in resolve_refs_recursively\r\nlocalstack | result = _resolve_refs_recursively(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/utils/functions.py\", line 80, in func\r\nlocalstack | return wrapped(*args, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 290, in _resolve_refs_recursively\r\nlocalstack | resolved_getatt = get_attr_from_model_instance(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 102, in get_attr_from_model_instance\r\nlocalstack | attribute = attribute.get(part)\r\nlocalstack | AttributeError: 'NoneType' object has no attribute 'get'\r\nlocalstack |\r\nlocalstack |\r\nlocalstack | 2023-08-06T03:56:33.426 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:33.445 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:38.447 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:38.458 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:43.464 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:43.473 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:48.479 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:48.489 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:53.495 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:53.502 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:58.487 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:56:58.495 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:56:58.743 DEBUG --- [uncthread154] l.s.c.e.template_deployer : Error applying changes 
for CloudFormation stack \"stack\": Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once Traceback (most recent call last):\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 965, in _run\r\nlocalstack | self.do_apply_changes_in_loop(changes, stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1039, in do_apply_changes_in_loop\r\nlocalstack | self.apply_change(change, stack=stack)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/engine/template_deployer.py\", line 1152, in apply_change\r\nlocalstack | progress_event = executor.deploy_loop(resource_provider_payload) # noqa\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 572, in deploy_loop\r\nlocalstack | event = self.execute_action(resource_provider, payload)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 638, in execute_action\r\nlocalstack | return resource_provider.create(request)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 350, in create\r\nlocalstack | return self.create_or_delete(request)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/resource_provider.py\", line 499, in create_or_delete\r\nlocalstack | result_handler(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/localstack/services/cloudformation/models/cloudformation.py\", line 55, in _handle_result\r\nlocalstack | connect_to().cloudformation.get_waiter(\"stack_create_complete\").wait(\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/waiter.py\", line 55, in wait\r\nlocalstack | Waiter.wait(self, **kwargs)\r\nlocalstack | File \"/opt/code/localstack/.venv/lib/python3.10/site-packages/botocore/waiter.py\", line 375, in wait\r\nlocalstack | raise WaiterError(\r\nlocalstack | botocore.exceptions.WaiterError: Waiter StackCreateComplete failed: Waiter encountered a terminal failure state: For expression \"Stacks[].StackStatus\" we matched expected path: \"CREATE_FAILED\" at least once\r\nlocalstack |\r\nlocalstack |\r\nlocalstack | 2023-08-06T03:57:03.504 INFO --- [ asgi_gw_3] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\nlocalstack | 2023-08-06T03:57:03.511 INFO --- [ asgi_gw_2] localstack.request.aws : AWS cloudformation.DescribeStacks => 200\r\nlocalstack | 2023-08-06T03:57:03.520 INFO --- [ asgi_gw_4] localstack.request.aws : AWS cloudformation.DescribeStackEvents => 200\r\n```\r\n\r\nI am using the CDK Table class with a stream enabled, which requires replicas to enable the stream. 
There's no creative way around this issue that I'm aware of, due to the stack failing before the fully deployment of the Table class.\n\n### Expected Behavior\n\nI'm expecting the build to succeed locally, so I can continue with our stack deployment chain for our local environments.\r\n\r\nThe behavior is not present on `localstack/localstack-pro:2.2.0` or any release prior to the update this week. I am successfully able to deploy to live environments with the same CDK stack without error.\n\n### How are you starting LocalStack?\n\nWith a docker-compose file\n\n### Steps To Reproduce\n\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\ndocker-compose up\r\n \r\n```\r\nversion: \"3.8\"\r\n\r\nnetworks:\r\n ls:\r\n name: ls\r\n\r\nservices:\r\n localstack:\r\n container_name: \"${LOCALSTACK_DOCKER_NAME-localstack}\"\r\n environment:\r\n - DEBUG=${DEBUG-1}\r\n - DISABLE_CORS_CHECKS=1\r\n - DISABLE_CUSTOM_CORS_APIGATEWAY=1\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - EXTRA_CORS_ALLOWED_ORIGINS=*\r\n - MAIN_DOCKER_NETWORK=ls\r\n - PERSISTENCE=${PERSISTENCE-}\r\n env_file:\r\n - ./localstack.local.env\r\n image: \"localstack/localstack-pro:${LOCALSTACK_VERSION-latest}\"\r\n networks:\r\n - ls\r\n ports:\r\n - \"127.0.0.1:4566:4566\" # LocalStack Gateway\r\n - \"127.0.0.1:4510-4559:4510-4559\" # external services port range\r\n - \"127.0.0.1:53:53\" # DNS config (required for Pro)\r\n - \"127.0.0.1:53:53/udp\" # DNS config (required for Pro)\r\n - \"127.0.0.1:443:443\" # LocalStack HTTPS Gateway (required for Pro)\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n\r\n```\r\n\r\n.env file contains:\r\n\r\n```\r\nLOCALSTACK_API_KEY=xxxxxxxxxx\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nnpx cdklocal deploy api\r\n\r\n```\r\nconst account = process.env.CDK_ACCOUNT || \"000000000000\";\r\nconst region = \"us-west-2\";\r\n\r\nnew APIStack(app, \"api\", {\r\n crossRegionReferences: true,\r\n env: { account, region },\r\n stackName: \"api\",\r\n});\r\n```\r\n\r\n```\r\nnew DynamoStack(this, \"ddbFooTable\", {\r\n billingMode: BillingMode.PROVISIONED,\r\n deletionProtection: false,\r\n encryption: TableEncryption.AWS_MANAGED,\r\n partitionKey: { name: \"id\", type: AttributeType.STRING },\r\n pointInTimeRecovery: false,\r\n removalPolicy: false\r\n ? RemovalPolicy.RETAIN\r\n : RemovalPolicy.DESTROY,\r\n replicationRegions: [\"us-west-1\"],\r\n stream: StreamViewType.NEW_AND_OLD_IMAGES,\r\n tableName: \"foo\",\r\n});\r\n```\r\n\n\n### Environment\n\n```markdown\n- OS: OSX 13.4\r\n- LocalStack: latest (70e077bf43491cc0954698c1240159caa9cecc0ac6652b890b52aaf0801d5fcb)\r\n- aws-cdk-lib: 2.90.0\r\n- aws-cdk-local: 2.18.0\n```\n\n\n### Anything else?\n\nUnfortunately, I'm in a position where I need the latest update due to Cognito User Pool domains not being functional in prior releases. I'm trying to get our devs to authenticate locally with an Oauth2 IDP flow instead of forcing them to authenticate with a live-stable environment. 
More information [here](https://github.com/localstack/localstack/issues/8700).", "code": null, "pr_html_url": "https://github.com/localstack/localstack/pull/8882", "commit_html_url": null, "file_loc": "{'base_commit': '78f635ad3a8f819645f3991dfd244ff09f06a7f0', 'files': [{'path': 'localstack/services/cloudformation/engine/template_deployer.py', 'status': 'modified', 'Loc': {\"(None, 'get_attr_from_model_instance', 75)\": {'add': [100, 101]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/cloudformation/engine/template_deployer.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "f4a188b6d51155a0831a3246f1d8e4f4be835861", "iss_html_url": "https://github.com/localstack/localstack/issues/4652", "iss_label": "type: bug\nstatus: triage needed", "title": "bug: LAMBDA_DOCKER_FLAGS doesn't work with -e", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nAttempting to add environment variables to containers created to service lambda requests no longer works in the latest version of localstack. This works in localStack version 0.12.16\r\n\r\n### Expected Behavior\r\n\r\nSetting the environment variable `LAMBDA_DOCKER_FLAGS=-e TEST_VAL=True` on localstack's docker container will result in spawned containers created for serving lambda functions having the environment variable TEST_VAL set to True.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n1. Run `docker-compose up -d` with the following docker-compose.yml\r\n```yaml\r\nversion: '2.1'\r\n\r\nservices:\r\n localstack_ltest:\r\n container_name: \"ltest\"\r\n image: localstack/localstack:0.12.18\r\n ports:\r\n - \"4566:4566\"\r\n environment:\r\n - DOCKER_HOST=unix:///var/run/docker.sock\r\n - LOCALSTACK_API_KEY=\r\n - LAMBDA_EXECUTOR=docker-reuse\r\n - LAMBDA_DOCKER_FLAGS=-e TEST_VAL=True\r\n volumes:\r\n - \"/var/run/docker.sock:/var/run/docker.sock\"\r\n restart: always\r\n```\r\n2. Create the file logs-template.yml\r\n```yaml\r\n---\r\nAWSTemplateFormatVersion: '2010-09-09'\r\nResources: \r\n LambdaFunctionLogGroup:\r\n Type: AWS::Logs::LogGroup\r\n Properties: \r\n RetentionInDays: 60\r\n LogGroupName: !Join [\"\", [\"/aws/lambda/\", !Ref LambdaFunction]]\r\n LambdaFunctionRole:\r\n Type: AWS::IAM::Role\r\n Properties:\r\n AssumeRolePolicyDocument:\r\n Version: '2012-10-17'\r\n Statement:\r\n - Effect: Allow\r\n Principal:\r\n Service:\r\n - lambda.amazonaws.com\r\n Action:\r\n - sts:AssumeRole\r\n Path: /\r\n Policies:\r\n - PolicyName: LambdaRolePolicy\r\n PolicyDocument:\r\n Statement:\r\n - Effect: Allow\r\n Action:\r\n - 'logs:*'\r\n Resource: 'arn:aws:logs:*:*:*'\r\n LambdaFunction:\r\n Type: AWS::Lambda::Function\r\n Properties:\r\n FunctionName: \"test-function\"\r\n Role: !GetAtt LambdaFunctionRole.Arn\r\n Handler: index.lambda_handler\r\n Runtime: python3.8\r\n Code:\r\n ZipFile: |\r\n import os\r\n def lambda_handler(event, context):\r\n print(\"environ: \" + str(os.environ))\r\n\r\n\r\n```\r\n3. Run ` aws cloudformation deploy --stack-name test --template-file .\\logs-template.yml --endpoint-url http://127.0.0.1:4566 --region us-east-1`\r\n4. 
Run `aws --endpoint-url http://127.0.0.1:4566 --region us-east-1 lambda invoke --function-name test-function out.txt`\r\n5. Run `aws --endpoint-url=http://localhost:4566 --region us-east-1 logs tail /aws/lambda/test-function`\r\n6. Check for `\"TEST_VAL\": \"True\",` being in the output of the above command.\r\n\r\n### Environment\r\n\r\n```markdown\r\nCurrent configuration (broken)\r\n- OS: Ubuntu 20.04\r\n- LocalStack version: 0.12.18\r\n- LocalStack build date: 2021-09-27\r\n- LocalStack build git hash: 00797f9e\r\n\r\nWorking configuration\r\n- OS: Ubuntu 20.04\r\n- LocalStack version: 0.12.16\r\n- LocalStack Docker container id: b0137bad2045\r\n- LocalStack build date: 2021-07-31\r\n- LocalStack build git hash: f1262f74\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/localstack/localstack/commit/f4a188b6d51155a0831a3246f1d8e4f4be835861", "file_loc": "{'base_commit': 'f4a188b6d51155a0831a3246f1d8e4f4be835861', 'files': [{'path': 'localstack/services/awslambda/lambda_executors.py', 'status': 'modified', 'Loc': {\"('LambdaExecutorContainers', 'run_lambda_executor', 495)\": {'mod': [543, 544]}}}, {'path': 'tests/integration/test_lambda.py', 'status': 'modified', 'Loc': {\"('TestLambdaBaseFeatures', 'test_large_payloads', 477)\": {'add': [492]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/awslambda/lambda_executors.py"], "doc": [], "test": ["tests/integration/test_lambda.py"], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "dec1ba1b94153a4380cc94a0c8bd805f8922b6e3", "iss_html_url": "https://github.com/localstack/localstack/issues/3202", "iss_label": "status: triage needed\narea: configuration", "title": "Illegal path is passed into the HEAD request during the object download", "body": "# Type of request: This is a ...\n[ ] bug report\n[ ] feature request\n\n# Detailed description\nI have tried to upgrade localstack to 0.11.5(and after) and use the service port 4566.\nOn local, I got passes to all tests we have been doing.\nBut on CircleCI, I got errors when download object from s3.\n\n```\nAn error occurred (404) when calling the HeadObject operation: Not Found\n```\n\nIt goes through to 0.11.4, so I'm sure it's a bug, but what do you think?\nWould you have any advice?\n\n## Expected behavior\nMy test does as below\n\n1. upload object to s3 bucket named \"test-bucket\"\n1. list v2 for the bucket\n1. 
download it\n\nSo, I expected to succeed to this.\nOf course, this test has been passed until localstack upgrade.\n\n## Actual behavior\nAt CircleCI, I got below.\nSomehow \"/test-bucket/test-bucket\" is being passed as a HEAD request parameter at download time.\nThis path is double of bucket name \"/test-bucket\".\n\n```\n:\n2020-10-30 05:47:54,523:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"PUT /test-bucket HTTP/1.1\" 200 -\n2020-10-30 05:47:54,536:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"PUT /test-bucket/loadable/2020/04/06/000000_2_e462a109-916a-4b8f-b393-f5b01a6a8c10 HTTP/1.1\" 200 -\n2020-10-30 05:47:54,554:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"GET /test-bucket?list-type=2&max-keys=200&prefix=loadable%2F HTTP/1.1\" 200 -\n2020-10-30 05:47:54,566:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"GET /test-bucket/loadable/2020/04/06/000000_2_e462a109-916a-4b8f-b393-f5b01a6a8c10 HTTP/1.1\" 206 -\n2020-10-30 05:47:54,578:API: 172.20.0.5 - - [30/Oct/2020 05:47:54] \"HEAD /test-bucket/test-bucket HTTP/1.1\" 404 -\n2020-10-30T05:47:54:WARNING:bootstrap.py: Thread run method ._run at 0x7f333a234f70>(None) failed: An error occurred (404) when calling the HeadObject operation: Not Found Traceback (most recent call last):\n File \"/opt/code/localstack/localstack/utils/bootstrap.py\", line 534, in run\n result = self.func(self.params)\n File \"/opt/code/localstack/localstack/utils/async_utils.py\", line 28, in _run\n return fn(*args, **kwargs)\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 560, in handler\n response = modify_and_forward(method=method, path=path_with_params, data_bytes=data, headers=headers,\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 333, in modify_and_forward\n listener_result = listener.forward_request(method=method,\n File \"/opt/code/localstack/localstack/services/edge.py\", line 81, in forward_request\n return do_forward_request(api, port, method, path, data, headers)\n File \"/opt/code/localstack/localstack/services/edge.py\", line 86, in do_forward_request\n result = do_forward_request_inmem(api, port, method, path, data, headers)\n File \"/opt/code/localstack/localstack/services/edge.py\", line 106, in do_forward_request_inmem\n response = modify_and_forward(method=method, path=path, data_bytes=data, headers=headers,\n File \"/opt/code/localstack/localstack/services/generic_proxy.py\", line 401, in modify_and_forward\n updated_response = update_listener.return_response(**kwargs)\n File \"/opt/code/localstack/localstack/services/s3/s3_listener.py\", line 1254, in return_response\n fix_range_content_type(bucket_name, path, headers, response)\n File \"/opt/code/localstack/localstack/services/s3/s3_listener.py\", line 465, in fix_range_content_type\n result = s3_client.head_object(Bucket=bucket_name, Key=key_name)\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 357, in _api_call\n return self._make_api_call(operation_name, kwargs)\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 676, in _make_api_call\n raise error_class(parsed_response, operation_name)\nbotocore.exceptions.ClientError: An error occurred (404) when calling the HeadObject operation: Not Found\n```\n\n# Steps to reproduce\n## Command used to start LocalStack\nsorry...\n\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\nsorry...\n\n\n\n\u2506Issue is synchronized with this [Jira Task](https://localstack.atlassian.net/browse/LOC-71) by 
[Unito](https://www.unito.io/learn-more)\n", "code": null, "pr_html_url": "https://github.com/localstack/localstack/pull/3370", "commit_html_url": null, "file_loc": "{'base_commit': 'dec1ba1b94153a4380cc94a0c8bd805f8922b6e3', 'files': [{'path': 'localstack/services/s3/s3_listener.py', 'status': 'modified', 'Loc': {\"(None, 'uses_path_addressing', 891)\": {'mod': [892]}}}, {'path': 'tests/integration/test_s3.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [36]}, \"('S3ListenerTest', None, 59)\": {'add': [1454]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["localstack/services/s3/s3_listener.py"], "doc": [], "test": ["tests/integration/test_s3.py"], "config": [], "asset": []}} +{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "7a53ba3dad9f7b2e31dac3fbb3162838eb9441c6", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/58", "iss_label": "", "title": "Please support Azure OpenAI", "body": null, "code": null, "pr_html_url": "https://github.com/openinterpreter/open-interpreter/pull/62", "commit_html_url": null, "file_loc": "{'base_commit': '7a53ba3dad9f7b2e31dac3fbb3162838eb9441c6', 'files': [{'path': 'interpreter/cli.py', 'status': 'modified', 'Loc': {\"(None, 'cli', 4)\": {'add': [28, 39]}}}, {'path': 'interpreter/interpreter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52]}, \"('Interpreter', '__init__', 62)\": {'add': [69]}, \"('Interpreter', 'verify_api_key', 255)\": {'add': [263], 'mod': [260, 262, 271, 272, 275, 276, 278, 279, 280, 281, 284, 285, 286, 287, 289]}, \"('Interpreter', 'respond', 296)\": {'add': [340], 'mod': [312, 313, 314, 315, 316, 317, 318]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/cli.py", "interpreter/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "e69269d844b7089dec636516d6edb4f70911ebf6", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/54", "iss_label": "", "title": "Support OPENAI_ API_ BASE for proxy URLs", "body": "How to add OPENAI_ API_ BASE code to use other proxy keys\uff1f", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/abi/screenshot-to-code/commit/e69269d844b7089dec636516d6edb4f70911ebf6", "file_loc": "{'base_commit': 'e69269d844b7089dec636516d6edb4f70911ebf6', 'files': [{'path': 'backend/image_generation.py', 'status': 'modified', 'Loc': {\"(None, 'process_tasks', 8)\": {'mod': [8, 9]}, \"(None, 'generate_image', 23)\": {'mod': [23, 24]}, \"(None, 'generate_images', 63)\": {'mod': [63, 90]}}}, {'path': 'backend/llm.py', 'status': 'modified', 'Loc': {\"(None, 'stream_openai_response', 8)\": {'mod': [9, 11]}}}, {'path': 'backend/main.py', 'status': 'modified', 'Loc': {\"(None, 'stream_code_test', 62)\": {'add': [75, 85, 119], 'mod': [132]}}}, {'path': 'frontend/src/App.tsx', 'status': 'modified', 'Loc': {'(None, None, 39)': {'add': [39]}}}, {'path': 'frontend/src/components/SettingsDialog.tsx', 'status': 'modified', 'Loc': {'(None, None, 78)': {'add': [78]}}}, {'path': 'frontend/src/types.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}}}]}", "own_code_loc": [], "ass_file_loc": [], 
"other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/src/App.tsx", "frontend/src/types.ts", "backend/main.py", "frontend/src/components/SettingsDialog.tsx", "backend/image_generation.py", "backend/llm.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "d363cf4639aacdaefbb8f69919f3c787a4519b7b", "iss_html_url": "https://github.com/pytorch/pytorch/issues/38479", "iss_label": "triaged\nmodule: numpy", "title": "torch.einsum does not pass equation argument to __torch_function__ API", "body": "## \ud83d\udc1b Bug\r\n\r\nwhen delegating torch.einsum call to an object which implements\r\n`__torch_function__` API the equation argument is not passed resulting in the error.\r\n```TypeError: einsum(): argument 'equation' (position 1) must be str, not Tensor```\r\n\r\nthis was tested on pytorch 1.5.0\r\n\r\nI've actually found the cause of this bug and have written a fix.\r\n\r\nthe following script illustrates the problem and the proposed solution\r\n\r\n## To Reproduce\r\n\r\n```python \r\nimport torch\r\n\r\nclass Wrapper():\r\n def __init__(self,data):\r\n self.data = data\r\n \r\n def __torch_function__(self, func, types, args=(), kwargs=None):\r\n if kwargs is None:\r\n kwargs = {}\r\n\r\n #unwrap inputs if necessary\r\n def unwrap(v):\r\n return v.data if isinstance(v,Wrapper) else v\r\n args = map(unwrap,args)\r\n kwargs = {k:unwrap(v) for k,v in kwargs.items()}\r\n\r\n return func(*args, **kwargs)\r\n\r\n\r\n\r\n# fixed einsum implementation\r\nfrom torch import Tensor,_VF\r\nfrom torch._overrides import has_torch_function,handle_torch_function\r\ndef fixed_einsum(equation,*operands):\r\n if not torch.jit.is_scripting():\r\n if any(type(t) is not Tensor for t in operands) and has_torch_function(operands):\r\n # equation is not passed\r\n # return handle_torch_function(einsum, operands, *operands)\r\n return handle_torch_function(fixed_einsum, operands, equation,*operands)\r\n if len(operands) == 1 and isinstance(operands[0], (list, tuple)):\r\n # the old interface of passing the operands as one list argument\r\n operands = operands[0]\r\n\r\n # recurse incase operands contains value that has torch function\r\n #in the original implementation this line is omitted\r\n return fixed_einsum(equation,*operands)\r\n\r\n return _VF.einsum(equation, operands)\r\n\r\n\r\nif __name__ == \"__main__\":\r\n print(torch.__version__)\r\n # uncomment to use fixed einsum\r\n # torch.einsum = fixed_einsum\r\n\r\n #operands are wrapped\r\n x = Wrapper(torch.randn(5))\r\n y = Wrapper(torch.randn(4))\r\n assert torch.allclose(torch.einsum('i,j->ij',x, y),torch.ger(x,y)) # outer product\r\n print(\"works with wrapped inputs\") \r\n\r\n #old interface operands is a list\r\n a = Wrapper(torch.randn(2,3))\r\n b = Wrapper(torch.randn(5,3,7))\r\n c = Wrapper(torch.randn(2,7))\r\n assert torch.allclose(torch.einsum('ik,jkl,il->ij', [a, b, c]),torch.nn.functional.bilinear(a,c,b)) # bilinear interpolation\r\n print(\"works with old API operands is list\")\r\n \r\n #equation is wrapped\r\n As = Wrapper(torch.randn(3,2,5))\r\n Bs = Wrapper(torch.randn(3,5,4))\r\n equation = Wrapper('bij,bjk->bik')\r\n assert torch.allclose(torch.einsum(equation, As, Bs),torch.matmul(As,Bs)) # batch matrix multiplication\r\n print(\"works with equation wrapped\")\r\n\r\n #see that it also works with plain tensors\r\n x = torch.randn(5)\r\n y = torch.randn(4)\r\n 
assert torch.allclose(torch.einsum('i,j->ij',x, y),torch.ger(x,y)) \r\n print(\"works with no wrapped values\")\r\n\r\n\r\n\r\n```\r\n\r\n\r\ncc @albanD @mruberry", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/d363cf4639aacdaefbb8f69919f3c787a4519b7b", "file_loc": "{'base_commit': 'd363cf4639aacdaefbb8f69919f3c787a4519b7b', 'files': [{'path': 'test/test_overrides.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [490]}}}, {'path': 'torch/_overrides.py', 'status': 'modified', 'Loc': {\"(None, 'get_testing_overrides', 144)\": {'mod': [264]}}}, {'path': 'torch/functional.py', 'status': 'modified', 'Loc': {\"(None, 'einsum', 222)\": {'add': [300], 'mod': [297]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/functional.py", "torch/_overrides.py"], "doc": [], "test": ["test/test_overrides.py"], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "81a4aeabdf9d550ceda52a5060f19568de61b265", "iss_html_url": "https://github.com/pytorch/pytorch/issues/93667", "iss_label": "triaged\ntracker\noncall: pt2\nmodule: dynamo", "title": "14k github models on PyTorch 2.0 pass rates dashboard ", "body": "We are weekly running dynamo-eager, dynamo-eager-fullgraph, export and inductor on 14k ```nn.Modules``` crawled from 1.4k github projects to get coverage level, find and fix bugs. This dashboard page is used to track the pass rates of different backends. \r\n\r\nHow to repro:\r\n* Checkout https://github.com/jansel/pytorch-jit-paritybench\r\n* Batch evaluation with different backends:\r\n * dynamo-eager: ```python main.py --backend eager```\r\n * dynamo-eager-fullgraph: ```python main.py --backend eager --fullgraph```\r\n * export: ```python main.py --compile_mode export```\r\n * inductor: ```python main.py```\r\n* Adhoc evaluation:\r\n * ```pytest ./generated/{filename}.py -k test_{n}``` (e.g, ```pytest ./generated/test_KunpengLi1994_VSRN.py -k test_002```)\r\n * ```python -e ./generated/{filename}.py --backend eager```\r\n\r\nBugs umbrella task(#92670)\r\n\r\ncc @ezyang @msaroufim @bdhirsh @anijain2305 @zou3519 @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @chenyang78 @aakhundov @kadeng @soumith @wconstab @ngimel @Xia-Weiwen @desertfire @davidberard98", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/81a4aeabdf9d550ceda52a5060f19568de61b265", "file_loc": "{'base_commit': '81a4aeabdf9d550ceda52a5060f19568de61b265', 'files': [{'path': 'test/dynamo/test_misc.py', 'status': 'modified', 'Loc': {\"('MiscTests', None, 40)\": {'add': [2965]}, \"('MiscTests', 'fn', 421)\": {'mod': [422]}, \"('MiscTests', 'test_numel', 420)\": {'mod': [425]}}}, {'path': 'torch/_dynamo/variables/tensor.py', 'status': 'modified', 'Loc': {\"('TensorVariable', 'call_method', 178)\": {'mod': [206]}}}, {'path': 'torch/_dynamo/variables/torch.py', 'status': 'modified', 'Loc': {\"('TorchVariable', 'can_constant_fold_through', 159)\": {'add': [165]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/_dynamo/variables/torch.py", "torch/_dynamo/variables/tensor.py"], "doc": [], "test": 
["test/dynamo/test_misc.py"], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "aac9e5288f7a9666884705e2b716c260cb5f9afc", "iss_html_url": "https://github.com/pytorch/pytorch/issues/67002", "iss_label": "module: windows\nmodule: multiprocessing\ntriaged\nskipped", "title": "DISABLED test_fs_sharing (__main__.TestMultiprocessing)", "body": "Flaky failures in the last week: https://fburl.com/scuba/opensource_ci_jobs/inmj698k. They only appear to be on windows\r\n\r\nPlatforms: rocm\r\n\r\ncc @peterjc123 @mszhanyi @skyline75489 @nbcsm @VitalyFedyunin", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/aac9e5288f7a9666884705e2b716c260cb5f9afc", "file_loc": "{'base_commit': 'aac9e5288f7a9666884705e2b716c260cb5f9afc', 'files': [{'path': 'test/test_multiprocessing.py', 'status': 'modified', 'Loc': {\"('TestMultiprocessing', 'test_receive', 289)\": {'add': [293]}, '(None, None, None)': {'mod': [19, 27]}, \"('TestMultiprocessing', 'test_fill', 259)\": {'mod': [269, 270]}, \"('TestMultiprocessing', None, 251)\": {'mod': [361]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": ["test/test_multiprocessing.py"], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "e37a22128eca7ccac6e289659587a9e1bfe6d242", "iss_html_url": "https://github.com/pytorch/pytorch/issues/15052", "iss_label": "oncall: jit", "title": "Tracer doesn't work with join/wait", "body": "Reported error: `RuntimeError: output 0 of traced region did not have observable data dependence with trace inputs; this probably indicates your program cannot be understood by the tracer.`\r\n\r\nTo reproduce:\r\n```python\r\ndef test_async_script_trace(self):\r\n class Module(torch.jit.ScriptModule):\r\n def __init__(self):\r\n super(Module, self).__init__(False)\r\n\r\n @torch.jit.script_method\r\n def forward(self, x):\r\n future = torch.jit._fork(torch.neg, x)\r\n outputs = []\r\n outputs.append(torch.jit._wait(future))\r\n\r\n return outputs\r\n\r\n class Tuple(nn.Module):\r\n def __init__(self):\r\n super(Tuple, self).__init__()\r\n self.module = Module()\r\n\r\n def forward(self, x):\r\n return tuple(self.module(x))\r\n\r\n x = torch.rand(3, 4)\r\n module = torch.jit.trace(Tuple(), (x), _force_outplace=True)\r\n self.assertEqual(module(x), torch.neg(x))", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/e37a22128eca7ccac6e289659587a9e1bfe6d242", "file_loc": "{'base_commit': 'e37a22128eca7ccac6e289659587a9e1bfe6d242', 'files': [{'path': 'aten/src/ATen/core/jit_type.h', 'status': 'modified', 'Loc': {\"(None, 'FutureType', 516)\": {'add': [534]}}}, {'path': 'test/test_jit.py', 'status': 'modified', 'Loc': {\"('TestAsync', None, 11055)\": {'add': [11224]}}}, {'path': 'torch/csrc/jit/graph_executor.cpp', 'status': 'modified', 'Loc': {'(None, None, 505)': {'mod': [516, 530, 531, 532, 533]}}}, {'path': 'torch/csrc/jit/tracer.cpp', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39]}}}, {'path': 'torch/csrc/jit/tracer.h', 'status': 'modified', 'Loc': {\"(None, 'function', 42)\": {'add': [44]}, \"(None, 'tracer', 24)\": {'mod': [35, 36, 37, 38]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", 
"loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/csrc/jit/tracer.h", "torch/csrc/jit/tracer.cpp", "aten/src/ATen/core/jit_type.h", "torch/csrc/jit/graph_executor.cpp"], "doc": [], "test": ["test/test_jit.py"], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "0c091380cc03b23e68dde7368f3b910c21deb010", "iss_html_url": "https://github.com/pytorch/pytorch/issues/21680", "iss_label": "high priority\nmodule: cudnn\nmodule: nn\ntriaged\nsmall", "title": "Disable nondeterministic CTCLoss from cuDNN", "body": "## \ud83d\udc1b Bug\r\n\r\n\r\n\r\n## To Reproduce\r\n\r\nSteps to reproduce the behavior:\r\n\r\n1.I i updated pytorch version and ctc\uff0cuse pytorch_nightly, but in my train ,nn.CTCloss() is still zero,so,i would like to ask if the version pytorch(nightly) has been solved this problem\r\n1.\r\n1.\r\n\r\n\r\n\r\n## Expected behavior\r\n\r\n\r\n\r\n## Environment\r\n\r\nPlease copy and paste the output from our\r\n[environment collection script](https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py)\r\n(or fill out the checklist below manually).\r\n\r\nYou can get the script and run it with:\r\n```\r\nwget https://raw.githubusercontent.com/pytorch/pytorch/master/torch/utils/collect_env.py\r\n# For security purposes, please check the contents of collect_env.py before running it.\r\npython collect_env.py\r\n```\r\n\r\n - PyTorch Version (e.g., 1.0):\r\n - OS (e.g., Linux):\r\n - How you installed PyTorch (`conda`, `pip`, source):\r\n - Build command you used (if compiling from source):\r\n - Python version:\r\n - CUDA/cuDNN version:\r\n - GPU models and configuration:\r\n - Any other relevant information:\r\n\r\n## Additional context\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/0c091380cc03b23e68dde7368f3b910c21deb010", "file_loc": "{'base_commit': '0c091380cc03b23e68dde7368f3b910c21deb010', 'files': [{'path': 'aten/src/ATen/native/LossCTC.cpp', 'status': 'modified', 'Loc': {\"(None, 'ctc_loss', 341)\": {'mod': [367]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code\nC"}, "loctype": {"code": ["aten/src/ATen/native/LossCTC.cpp"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "5d08c7201fa5b4641f4277bf248c944b2c297b94", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/843", "iss_label": "bug", "title": "permission denied", "body": "**Bug description**\r\nI'm running basic G4F code from README:\r\n```\r\nimport g4f\r\n\r\n\r\nresponse = g4f.ChatCompletion.create(\r\n model=g4f.models.gpt_4,\r\n messages=[{\"role\": \"user\", \"content\": \"hi\"}],\r\n) # alterative model setting\r\n\r\nprint(response)\r\n```\r\n\r\nError:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\IonE\\Desktop\\main.py\", line 3, in \r\n import g4f\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\__init__.py\", line 1, in \r\n from . 
import models\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\models.py\", line 3, in \r\n from .Provider import Bard, BaseProvider, GetGpt, H2o, Liaobots, Vercel, Equing\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\__init__.py\", line 6, in \r\n from .Bard import Bard\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\Bard.py\", line 11, in \r\n class Bard(AsyncProvider):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\Bard.py\", line 22, in Bard\r\n cookies: dict = get_cookies(\".google.com\"),\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\g4f\\Provider\\base_provider.py\", line 45, in get_cookies\r\n for cookie in browser_cookie3.load(cookie_domain):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 1233, in load\r\n for cookie in cookie_fn(domain_name=domain_name):\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 1160, in chrome\r\n return Chrome(cookie_file, domain_name, key_file).load()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 489, in load\r\n with _DatabaseConnetion(self.cookie_file) as con:\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 349, in __enter__\r\n return self.get_connection()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 383, in get_connection\r\n con = method()\r\n File \"D:\\Program Files\\Python399\\lib\\site-packages\\browser_cookie3\\__init__.py\", line 374, in __get_connection_legacy\r\n shutil.copyfile(self.__database_file, self.__temp_cookie_file)\r\n File \"D:\\Program Files\\Python399\\lib\\shutil.py\", line 264, in copyfile\r\n with open(src, 'rb') as fsrc:\r\nPermissionError: [Errno 13] Permission denied: 'C:\\\\Users\\\\IonE\\\\AppData\\\\Roaming\\\\..\\\\Local\\\\Google\\\\Chrome\\\\User Data\\\\Default\\\\Network\\\\Cookies'\r\n```\r\n\r\n**Environement**\r\n- python 3.9.9\r\n- ukraine", "code": null, "pr_html_url": "https://github.com/xtekky/gpt4free/pull/847", "commit_html_url": null, "file_loc": "{'base_commit': '5d08c7201fa5b4641f4277bf248c944b2c297b94', 'files': [{'path': 'g4f/Provider/Bard.py', 'status': 'modified', 'Loc': {\"('Bard', 'create_async', 17)\": {'add': [33], 'mod': [22]}}}, {'path': 'g4f/Provider/Bing.py', 'status': 'modified', 'Loc': {\"('Bing', 'create_async_generator', 21)\": {'add': [34], 'mod': [24]}}}, {'path': 'g4f/Provider/Hugchat.py', 'status': 'modified', 'Loc': {\"('Hugchat', 'create_completion', 17)\": {'add': [25], 'mod': [23]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Bing.py", "g4f/Provider/Hugchat.py", "g4f/Provider/Bard.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1003", "iss_label": "bug", "title": "please delete site chat.aivvm.com", "body": "**Known Issues** // delete this\r\nplease delete site `chat.aivvm.com`\r\n\r\n**Delete site description**\r\nGpt4free maintainers, I am the administrator of chat.aivvm.com and request to delete this site. 
My website is already under high load\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/xtekky/gpt4free/commit/3430b04f870d982d7fba34e3b9d6e5cf3bd3b847", "file_loc": "{'base_commit': '3430b04f870d982d7fba34e3b9d6e5cf3bd3b847', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 218)': {'mod': [218]}, '(None, None, 281)': {'mod': [281]}, '(None, None, 374)': {'mod': [374]}}}, {'path': 'etc/testing/test_chat_completion.py', 'status': 'modified', 'Loc': {\"(None, 'run_async', 19)\": {'mod': [22]}}}, {'path': 'g4f/Provider/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}}}, {'path': 'g4f/Provider/Aivvm.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'mod': [3, 4, 5, 21, 22]}}}, {'path': 'g4f/Provider/deprecated/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14]}}}, {'path': 'g4f/models.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23, 47, 56, 66, 170, 171, 172, 173, 178, 179, 183, 184, 188, 189]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nUser requested that the project remove their own site", "iss_reason": "2", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Aivvm.py", "g4f/Provider/deprecated/__init__.py", "g4f/Provider/__init__.py", "g4f/models.py"], "doc": ["README.md"], "test": ["etc/testing/test_chat_completion.py"], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/188", "iss_label": "fixed", "title": "can not run hackingtool !!!!! 
SyntaxError: invalid syntax ???", "body": "# python3 ./hackingtool.py\r\n\r\nTraceback (most recent call last):\r\n File \"/root/hackingtool/./hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/root/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n__________________________________________________________________________________\r\n\r\n\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/177", "iss_label": "", "title": "Help me please", "body": "Traceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/189", "iss_label": "", "title": "look", "body": "![image](https://user-images.githubusercontent.com/93758292/155468650-b9d57e21-6c82-4005-a3ee-1783699e7f11.png)\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/209", "iss_label": "", "title": "Running Issue", "body": "**Describe the bug**\nA clear and concise description of what the bug is.\n\nI have installed this tool successfully though when i go to run it with or it comes up with the error I have attached as a screenshot. 
Why would this be happening?\n\n**To Reproduce**\nSteps to reproduce the behavior:\n1. Go to '...'\n2. Click on '....'\n3. Scroll down to '....'\n4. See error\n\n**Expected behavior**\nA clear and concise description of what you expected to happen.\n\n**Screenshots**\nIf applicable, add screenshots to help explain your problem.\n![image](https://user-images.githubusercontent.com/13176339/161368818-26bf3219-9ba7-4bab-a451-65d379c6d405.jpeg)\n\n**Desktop (please complete the following information):**\n - OS: Kali\n - Browser [e.g. chrome, safari]\n - Version [e.g. 22]\n\n**Smartphone (please complete the following information):**\n - Device: rpi4\n - OS: [e.g. iOS8.1]\n - Browser [e.g. stock browser, safari]\n - Version [e.g. 22]\n\n**Additional context**\nAdd any other context about the problem here.", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/185", "iss_label": "fixed", "title": "SyntaxError: invalid syntax traceback most recent call last", "body": "\u250c\u2500\u2500(root\ud83d\udc80localhost)-[~]\r\n\u2514\u2500# hackingtool\r\nTraceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n\r\n\r\n\r\n\r\nthis happens when i type in \"hackingtool\" in terminal. 
any fixes?", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "0a4faeac9c4f93a61c937b0e57023b693beeca6f", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/187", "iss_label": "fixed", "title": "from tools.ddos import DDOSTools", "body": "# sudo hackingtool\r\nTraceback (most recent call last):\r\n File \"/usr/share/doc/hackingtool/hackingtool.py\", line 11, in \r\n from tools.ddos import DDOSTools\r\n File \"/usr/share/doc/hackingtool/tools/ddos.py\", line 29\r\n \"sudo\", \"python3 ddos\", method, url, socks_type5.4.1, threads, proxylist, multiple, timer])\r\n ^\r\nSyntaxError: invalid syntax\r\n", "code": null, "pr_html_url": "https://github.com/Z4nzu/hackingtool/pull/176", "commit_html_url": null, "file_loc": "{'base_commit': '0a4faeac9c4f93a61c937b0e57023b693beeca6f', 'files': [{'path': 'tools/ddos.py', 'status': 'modified', 'Loc': {\"('ddos', 'run', 20)\": {'mod': [29]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/ddos.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "bf0886bae0ccbc8c5d285b6e2affe7e40474f970", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/16924", "iss_label": "Bug\nEasy\nmodule:metrics", "title": "Matthews correlation coefficient metric throws misleading division by zero RuntimeWarning", "body": "#### Description\r\nWith tested values all equal, `sklearn.metrics.matthews_corrcoef` throws a `RuntimeWarning` reporting a division by zero. 
This behavior was already reported in #1937 and reported fixed, but reappears in recent versions.\r\n\r\n#### Steps/Code to Reproduce\r\nThe snippet below reproduces the warning.\r\n```python\r\nimport sklearn.metrics \r\ntrues = [1,0,1,1,0] \r\npreds = [0,0,0,0,0] \r\nsklearn.metrics.matthews_corrcoef(trues, preds)\r\n```\r\n\r\n#### Expected Results\r\nNo warning is thrown.\r\n\r\n#### Actual Results\r\nThe following warning is thrown:\r\n```\r\nC:\\anaconda\\envs\\sklearn-test\\lib\\site-packages\\sklearn\\metrics\\_classification.py:900: RuntimeWarning: invalid value encountered in double_scalars\r\n mcc = cov_ytyp / np.sqrt(cov_ytyt * cov_ypyp)\r\n```\r\n\r\n#### Versions\r\n```\r\nSystem:\r\n python: 3.8.2 (default, Mar 25 2020, 08:56:29) [MSC v.1916 64 bit (AMD64)]\r\nexecutable: C:\\anaconda\\envs\\sklearn-test\\python.exe\r\n machine: Windows-10-10.0.18362-SP0\r\n\r\nPython dependencies:\r\n pip: 20.0.2\r\nsetuptools: 46.1.3.post20200330\r\n sklearn: 0.22.1\r\n numpy: 1.18.1\r\n scipy: 1.4.1\r\n Cython: None\r\n pandas: None\r\nmatplotlib: None\r\n joblib: 0.14.1\r\n```\r\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/19977", "commit_html_url": null, "file_loc": "{'base_commit': 'bf0886bae0ccbc8c5d285b6e2affe7e40474f970', 'files': [{'path': 'sklearn/metrics/_classification.py', 'status': 'modified', 'Loc': {\"(None, 'matthews_corrcoef', 800)\": {'mod': [881, 883, 886]}}}, {'path': 'sklearn/metrics/tests/test_classification.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23, 625]}, \"(None, 'test_matthews_corrcoef', 671)\": {'mod': [687, 688, 690, 691, 694, 696, 697]}, \"(None, 'test_matthews_corrcoef_multiclass', 713)\": {'mod': [734, 737, 738, 739, 757, 761, 762, 763, 765, 766]}}}, {'path': 'sklearn/utils/_testing.py', 'status': 'modified', 'Loc': {\"(None, 'assert_warns_div0', 190)\": {'mod': [190, 191, 193, 194, 196, 197, 198, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 210, 211]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/metrics/_classification.py", "sklearn/utils/_testing.py"], "doc": [], "test": ["sklearn/metrics/tests/test_classification.py"], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "4ac6a90a82e4a8d7b5338c18ae8a16559c98ba10", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/5101", "iss_label": "", "title": "LatentDirichletAllocation has superfluous attributes", "body": "It has `dirichlet_component_` (undocumented) and `exp_dirichlet_component_` (exponential of same). 
I propose to get rid of at least the latter.\n", "code": null, "pr_html_url": "https://github.com/scikit-learn/scikit-learn/pull/5111", "commit_html_url": null, "file_loc": "{'base_commit': '4ac6a90a82e4a8d7b5338c18ae8a16559c98ba10', 'files': [{'path': 'sklearn/decomposition/online_lda.py', 'status': 'modified', 'Loc': {\"('LatentDirichletAllocation', '_approx_bound', 542)\": {'add': [579], 'mod': [597, 612]}, \"('LatentDirichletAllocation', '_init_latent_vars', 283)\": {'mod': [305, 306, 308]}, \"('LatentDirichletAllocation', '_em_step', 366)\": {'mod': [407, 408]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1\nsuperfluous option", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/decomposition/online_lda.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/76", "iss_label": "Bug", "title": "Sparse cumsum functions do not work", "body": "e.g. SparseSeries.cumsum\n", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/wesm/pandas/commit/05123af1b2f8db1bc4f05c22515ef378cbeefbd3", "file_loc": "{'base_commit': '05123af1b2f8db1bc4f05c22515ef378cbeefbd3', 'files': [{'path': 'pandas/core/frame.py', 'status': 'modified', 'Loc': {\"('DataFrame', None, 97)\": {'mod': [1962, 1963, 1964, 1966, 1967, 1968, 1969, 1971, 1972, 1973, 1974, 1975, 1976, 1977, 1978, 1979, 1980, 1981, 1982, 1983, 1984, 1985, 2021, 2022, 2023, 2025, 2026, 2027, 2028, 2030, 2031, 2032, 2033, 2034, 2035, 2036, 2037, 2038, 2039, 2041, 2043]}}}, {'path': 'pandas/core/generic.py', 'status': 'modified', 'Loc': {\"('PandasGeneric', '_reindex_axis', 162)\": {'add': [168]}}}, {'path': 'pandas/core/series.py', 'status': 'modified', 'Loc': {\"('Series', 'cumsum', 570)\": {'mod': [580, 581, 582, 583, 584, 585, 591, 592, 593, 594]}}}, {'path': 'pandas/core/sparse.py', 'status': 'modified', 'Loc': {\"('SparseSeries', None, 152)\": {'add': [512]}, \"('SparseDataFrame', 'count', 1058)\": {'add': [1059]}, '(None, None, None)': {'mod': [13]}}}, {'path': 'pandas/tests/test_frame.py', 'status': 'modified', 'Loc': {\"('TestDataFrame', None, 539)\": {'add': [2271]}, \"('TestDataFrame', 'test_cumsum', 2271)\": {'add': [2276, 2283, 2284], 'mod': [2273, 2274, 2286, 2287]}}}, {'path': 'pandas/tests/test_sparse.py', 'status': 'modified', 'Loc': {\"('TestSparseSeries', None, 111)\": {'add': [577]}, \"('TestSparseDataFrame', 'test_count', 1065)\": {'add': [1068]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["pandas/core/frame.py", "pandas/core/generic.py", "pandas/core/series.py", "pandas/core/sparse.py"], "doc": [], "test": ["pandas/tests/test_frame.py", "pandas/tests/test_sparse.py"], "config": [], "asset": []}} +{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "65c0441a41b2dcaeebb648274d30978419a8661a", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/16607", "iss_label": "Datetime\nCompat", "title": "to_datetime should support ISO week year", "body": "`to_datetime` does not currently seem to support `ISO week year` like `strptime` does:\r\n\r\n```\r\nIn [38]: datetime.date(2016, 1, 1).strftime('%G-%V')\r\nOut[38]: '2015-53'\r\n\r\nIn [39]: 
datetime.datetime.strptime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', '%G-%V-%u')\r\nOut[39]: datetime.datetime(2015, 12, 28, 0, 0)\r\n\r\nIn [41]: pd.to_datetime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', format='%G-%V-%u')\r\n ---------------------------------------------------------------------------\r\n TypeError Traceback (most recent call last)\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 443 try:\r\n --> 444 values, tz = tslib.datetime_to_datetime64(arg)\r\n 445 return DatetimeIndex._simple_new(values, name=name, tz=tz)\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.datetime_to_datetime64 (pandas/_libs/tslib.c:33275)()\r\n\r\n TypeError: Unrecognized value type: \r\n\r\n During handling of the above exception, another exception occurred:\r\n\r\n ValueError Traceback (most recent call last)\r\n in ()\r\n ----> 1 pd.to_datetime(datetime.date(2016, 1, 1).strftime('%G-%V')+'-1', format='%G-%V-%u')\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in to_datetime(arg, errors, dayfirst, yearfirst, utc, box, format, exact, unit, infer_datetime_format, origin)\r\n 516 result = _convert_listlike(arg, box, format)\r\n 517 else:\r\n --> 518 result = _convert_listlike(np.array([arg]), box, format)[0]\r\n 519 \r\n 520 return result\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 445 return DatetimeIndex._simple_new(values, name=name, tz=tz)\r\n 446 except (ValueError, TypeError):\r\n --> 447 raise e\r\n 448 \r\n 449 if arg is None:\r\n\r\n /Users/Robin/.pyenv/versions/3.6.1/lib/python3.6/site-packages/pandas/core/tools/datetimes.py in _convert_listlike(arg, box, format, name, tz)\r\n 412 try:\r\n 413 result = tslib.array_strptime(arg, format, exact=exact,\r\n --> 414 errors=errors)\r\n 415 except tslib.OutOfBoundsDatetime:\r\n 416 if errors == 'raise':\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.array_strptime (pandas/_libs/tslib.c:63124)()\r\n\r\n pandas/_libs/tslib.pyx in pandas._libs.tslib.array_strptime (pandas/_libs/tslib.c:63003)()\r\n\r\n ValueError: 'G' is a bad directive in format '%G-%V-%u'\r\n\r\n```\r\n\r\n
      \r\nINSTALLED VERSIONS\r\n------------------\r\ncommit: None\r\n\r\npandas: 0.20.1\r\npytest: 3.1.0\r\npip: 9.0.1\r\nsetuptools: 28.8.0\r\nCython: 0.25.2\r\nnumpy: 1.12.1\r\nscipy: 0.19.0\r\nxarray: None\r\nIPython: 6.0.0\r\nsphinx: None\r\npatsy: 0.4.1\r\ndateutil: 2.6.0\r\npytz: 2017.2\r\nblosc: None\r\nbottleneck: None\r\ntables: 3.4.2\r\nnumexpr: 2.6.2\r\nfeather: None\r\nmatplotlib: 2.0.2\r\nopenpyxl: None\r\nxlrd: None\r\nxlwt: None\r\nxlsxwriter: None\r\nlxml: None\r\nbs4: None\r\nhtml5lib: 0.999999999\r\nsqlalchemy: 1.1.10\r\npymysql: None\r\npsycopg2: 2.7.1 (dt dec pq3 ext lo64)\r\njinja2: 2.9.6\r\ns3fs: None\r\npandas_gbq: None\r\npandas_datareader: None\r\n
      \r\n", "code": null, "pr_html_url": "https://github.com/pandas-dev/pandas/pull/25541", "commit_html_url": null, "file_loc": "{'base_commit': '65c0441a41b2dcaeebb648274d30978419a8661a', 'files': [{'path': 'doc/source/whatsnew/v0.25.0.rst', 'status': 'modified', 'Loc': {'(None, None, 21)': {'add': [21]}}}, {'path': 'pandas/_libs/tslibs/strptime.pyx', 'status': 'modified', 'Loc': {'(None, None, 79)': {'add': [79]}, '(None, None, 171)': {'add': [171]}, '(None, None, 267)': {'add': [267]}, '(None, None, 513)': {'add': [513]}, '(None, None, 520)': {'add': [520]}, '(None, None, 521)': {'add': [521]}, '(None, None, 622)': {'add': [622]}, '(None, None, 57)': {'mod': [57]}, '(None, None, 178)': {'mod': [178]}, '(None, None, 271)': {'mod': [271, 272, 273, 274]}, '(None, None, 596)': {'mod': [596, 597]}, '(None, None, 600)': {'mod': [600]}}}, {'path': 'pandas/core/tools/datetimes.py', 'status': 'modified', 'Loc': {\"(None, 'to_datetime', 403)\": {'add': [457]}}}, {'path': 'pandas/tests/indexes/datetimes/test_tools.py', 'status': 'modified', 'Loc': {\"('TestToDatetime', None, 246)\": {'add': [246]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": "0", "info_type": "Code/Doc"}, "loctype": {"code": ["pandas/core/tools/datetimes.py", "pandas/_libs/tslibs/strptime.pyx"], "doc": ["doc/source/whatsnew/v0.25.0.rst"], "test": ["pandas/tests/indexes/datetimes/test_tools.py"], "config": [], "asset": []}} {"organization": "pallets", "repo_name": "flask", "base_commit": "6ed68f015a50ab35b84a8ea71b0f846ca6a75281", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/3074", "iss_label": "", "title": "send_file doesn't urlencode ':/' in unicode attachment_filename", "body": "### Expected Behavior\r\n\r\nWhen sending files with unicode filename (with `:` or `/`) they should be downloaded with name from `filename*` field.\r\n\r\n```python\r\n# -*- coding: utf-8 -*-\r\nimport os\r\nfrom flask import Flask, send_from_directory\r\napp = Flask(__name__)\r\n@app.route('/test/', methods=['GET'])\r\ndef test_route():\r\n tmp_dir = os.getcwd()\r\n tmp_filename = __file__\r\n attachment_filename = u'\u0442\u0435\u0441\u0442:\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442.py'\r\n return send_from_directory(\r\n tmp_dir,\r\n tmp_filename,\r\n as_attachment=True,\r\n attachment_filename=attachment_filename\r\n )\r\nif __name__ == '__main__':\r\n app.run(host='::', port=5000)\r\n```\r\n### Actual Behavior\r\n\r\nSome browsers (Chrome-based/Safari) ignore `filename*` field when it contains colon or slash. 
For example file `\u0442\u0435\u0441\u0442:\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442.py` gets downloaded in Chrome/Safari as `__.py` but in Firefox as `\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442_\u0442\u0435\u0441\u0442.py` which is acceptable in my opinion.\r\n\r\nFlask response:\r\n`Content-Disposition: attachment; filename*=\"UTF-8''%D1%82%D0%B5%D1%81%D1%82:%D1%82%D0%B5%D1%81%D1%82_%D1%82%D0%B5%D1%81%D1%82.py\"; filename=\":_.py\"`\r\n\r\n### Environment\r\n\r\n* Python version: 2.7.15\r\n* Flask version: 1.0.2\r\n* Werkzeug version: 0.14.1\r\n", "pr_html_url": "https://github.com/pallets/flask/pull/3273", "file_loc": "{'base_commit': '6ed68f015a50ab35b84a8ea71b0f846ca6a75281', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}}}, {'path': 'flask/helpers.py', 'status': 'modified', 'Loc': {\"(None, 'send_file', 454)\": {'mod': [579]}}}, {'path': 'tests/test_helpers.py', 'status': 'modified', 'Loc': {\"('TestSendfile', None, 436)\": {'add': [648]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/helpers.py"], "doc": ["CHANGES.rst"], "test": ["tests/test_helpers.py"], "config": [], "asset": []}} {"organization": "pallets", "repo_name": "flask", "base_commit": "50b7dcbab343c93bb6738bbf116a177e72b1d9ec", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/4099", "iss_label": "docs", "title": "Harmless race condition in tutorial", "body": "I was browsing the flaskr tutorial when I noticed an (admittedly quite unlikely) race condition in the `register` view, specifically:\r\n\r\n```py\r\nif not username:\r\n error = 'Username is required.'\r\nelif not password:\r\n error = 'Password is required.'\r\nelif db.execute(\r\n 'SELECT id FROM user WHERE username = ?', (username,)\r\n).fetchone() is not None:\r\n error = f\"User {username} is already registered.\"\r\n\r\nif error is None:\r\n db.execute(\r\n 'INSERT INTO user (username, password) VALUES (?, ?)',\r\n (username, generate_password_hash(password))\r\n )\r\n db.commit()\r\n return redirect(url_for('auth.login'))\r\n```\r\n\r\nIf two requests arrive with the right timing, the following can happen:\r\n\r\n```\r\n Request 1: Request 2:\r\nSELECT id\r\n FROM user\r\n WHERE username = abc\r\n |\r\n v\r\nempty, no such user\r\n\r\n SELECT id\r\n FROM user\r\n WHERE username = abc\r\n |\r\n v\r\n empty, no such user\r\n\r\nINSERT INTO user (username, password)\r\n VALUES (abc, 123)\r\n |\r\n v\r\n ok\r\n\r\n INSERT INTO user (username, password)\r\n VALUES (abc, 456)\r\n |\r\n v\r\n failed UNIQUE constraint -> \r\n -> sqlite3.IntegrityError ->\r\n -> user gets HTTP 500\r\n```\r\n\r\nWhile the likelihood of this happening is pretty small and the harm practically zero (user gets HTTP 500 and has to manually login/choose a different username), I feel like this is not really the sort of good practice the tutorial should teach. 
I also believe it's important the developer understands that it's the UNIQUE constraint that ensures their app works correctly and not the if condition in the application code (the tutorial mentions SQL injection attacks and explains what protects the developer against them, so I don't really feel this is out of scope).\r\n\r\nIn my own app I've modified the code to the following:\r\n```py\r\nif not username:\r\n error = 'Username is required.'\r\nelif not password:\r\n error = 'Password is required.'\r\nelse:\r\n try:\r\n db.execute(\r\n 'INSERT INTO users (username, password) VALUES (?, ?)',\r\n (username, generate_password_hash(password))\r\n )\r\n db.commit()\r\n except IntegrityError:\r\n error = f\"User {username} is already registered.\"\r\n else:\r\n return redirect(url_for('auth.login'))\r\n```\r\n\r\nI suggest something similar be incorporated into the tutorial, with a short explanation (maybe a comment) of how the UNIQUE constraint does the work for the developer and maybe a note about the principle that one should \"ask forgiveness, not permission.\" I'm not sure on how it's better worded, so I'm making this an issue instead of a pull request.\r\n\r\nCheers, and thank you for your great work!", "pr_html_url": "https://github.com/pallets/flask/pull/4139", "file_loc": "{'base_commit': '50b7dcbab343c93bb6738bbf116a177e72b1d9ec', 'files': [{'path': 'docs/tutorial/views.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [202], 'mod': [94, 95, 96, 97, 100, 101, 102, 103, 104, 105, 128, 129, 130, 131, 132, 133, 134, 136, 137, 138, 139, 141, 142, 143, 144, 145, 146, 147]}}}, {'path': 'examples/tutorial/flaskr/auth.py', 'status': 'modified', 'Loc': {\"(None, 'register', 47)\": {'mod': [63, 64, 65, 66, 67, 70, 71, 72, 73, 74, 75, 76, 77]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["examples/tutorial/flaskr/auth.py"], "doc": ["docs/tutorial/views.rst"], "test": [], "config": [], "asset": []}} {"organization": "pallets", "repo_name": "flask", "base_commit": "f17e6061fcffdc290f615d3fdc9d949e9e719574", "iss_has_pr": 1, "iss_html_url": "https://github.com/pallets/flask/issues/1443", "iss_label": "", "title": "json_encoder not invoked from flask.jsonify", "body": "I created a custom JSON encoder class extended from flask.json.JSONEncoder but it is not called when calling flask.jsonify. 
Additionally, I removed my custom JSON encoder and confirmed that flask.json.JSONEncoder isn't called either, via a breakpoint in PyCharm.\n\n```\nfrom flask import Flask\nfrom flask import jsonify\nfrom flask.json import JSONEncoder\n\nclass MyEncoder(JSONEncoder):\n def default(self, obj):\n if hasattr(obj, '__json__'):\n return obj.__json__()\n else:\n try:\n iterable = iter(obj)\n except TypeError:\n pass\n else:\n return list(iterable)\n\n return JSONEncoder.default(self, obj)\n\n\nclass MyClass(object):\n key = 'a'\n value = 'b'\n\n def __json__(self):\n return {'key': self.key, 'value': self.value}\n\napp = Flask(__name__)\napp.json_encoder = MyEncoder\n\n@app.route('/')\ndef hello_world():\n return jsonify(MyClass())\n\n\nif __name__ == '__main__':\n app.run(debug=True)\n```\n", "pr_html_url": "https://github.com/pallets/flask/pull/1671", "file_loc": "{'base_commit': 'f17e6061fcffdc290f615d3fdc9d949e9e719574', 'files': [{'path': 'AUTHORS', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17, 20], 'mod': [35]}}}, {'path': 'CHANGES', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}}}, {'path': 'docs/security.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [98, 100, 101, 102, 103, 105, 106, 107, 108, 109, 111, 112, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 125, 127, 128, 129, 130, 132, 133, 134, 135, 137, 138, 140, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 161, 162, 163, 164, 165, 166, 167, 168, 170, 171, 172, 173, 174, 175]}}}, {'path': 'flask/json.py', 'status': 'modified', 'Loc': {\"(None, 'jsonify', 201)\": {'add': [244], 'mod': [202, 203, 204, 205, 225, 226, 248, 249]}}}, {'path': 'tests/test_helpers.py', 'status': 'modified', 'Loc': {\"('TestJSON', 'test_json_as_unicode', 121)\": {'add': [122], 'mod': [124, 125, 126, 127, 129, 130, 131, 132]}, \"('TestJSON', None, 32)\": {'mod': [34, 35, 37, 38, 39, 40, 42, 43, 45, 46, 47, 48, 49, 50, 106, 107, 121]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["flask/json.py"], "doc": ["docs/security.rst", "CHANGES"], "test": ["tests/test_helpers.py"], "config": [], "asset": ["AUTHORS"]}}
@@ -776,335 +776,335 @@
{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "c4a996adfc91f023b46ce3cb67e33fc8b2ca3627", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/9400", "iss_label": "Visualization\nError Reporting", "title": "Improve error message in plotting.py's _plot", "body": "This is a minor enhancement proposal. At the moment I cannot submit a pull request. I will probably have time to create one during the next week. 
\n\nThis is a snippet from `tools/plotting.py`: https://github.com/pydata/pandas/blob/master/pandas/tools/plotting.py#L2269-2283\n\n``` python\ndef _plot(data, x=None, y=None, subplots=False,\n ax=None, kind='line', **kwds):\n kind = _get_standard_kind(kind.lower().strip())\n if kind in _all_kinds:\n klass = _plot_klass[kind]\n else:\n raise ValueError('Invalid chart type given %s' % kind)\n\n from pandas import DataFrame\n if kind in _dataframe_kinds:\n if isinstance(data, DataFrame):\n plot_obj = klass(data, x=x, y=y, subplots=subplots, ax=ax,\n kind=kind, **kwds)\n else:\n raise ValueError('Invalid chart type given %s' % kind)\n```\n\nWhich results in following error message:\n\n```\nC:\\Anaconda3\\lib\\site-packages\\pandas\\tools\\plotting.py in plot_series(series, label, kind, use_index, rot, xticks, yticks, xlim, ylim, ax, style, grid, legend, logx, logy, secondary_y, **kwds)\n 2231 klass = _plot_klass[kind]\n 2232 else:\n-> 2233 raise ValueError('Invalid chart type given %s' % kind)\n 2234 \n 2235 \"\"\"\n\nValueError: Invalid chart type given hist\n```\n\nI would suggest using the format string `\"Invalid chart type given: '%s'\"` instead.\n", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/9417", "file_loc": "{'base_commit': 'c4a996adfc91f023b46ce3cb67e33fc8b2ca3627', 'files': [{'path': 'pandas/tools/plotting.py', 'status': 'modified', 'Loc': {\"(None, '_plot', 2269)\": {'mod': [2269, 2277]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["pandas/tools/plotting.py"], "doc": [], "test": [], "config": [], "asset": []}} {"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "53243e8ec73ecf5035a63f426a9c703d6835e9a7", "iss_has_pr": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/54889", "iss_label": "Build", "title": "BUILD: Race condition between .pxi.in and .pyx compiles in parallel build of 2.1.0", "body": "### Installation check\n\n- [X] I have read the [installation guide](https://pandas.pydata.org/pandas-docs/stable/getting_started/install.html#installing-pandas).\n\n\n### Platform\n\nLinux-6.4.7-gentoo-dist-x86_64-AMD_Ryzen_5_3600_6-Core_Processor-with-glibc2.38\n\n### Installation Method\n\nBuilt from source\n\n### pandas Version\n\n2.1.0\n\n### Python Version\n\n3.11.5\n\n### Installation Logs\n\n
      \r\nBuild log excerpt\r\n\r\n```\r\ngpep517 build-wheel --backend mesonpy --output-fd 3 --wheel-dir /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/wheel --config-json {\"builddir\": \"/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10\", \"setup-args\": [], \"compile-args\": [\"-v\", \"-j12\", \"-l0\"]}\r\n2023-08-31 07:02:26,275 gpep517 INFO Building wheel via backend mesonpy\r\n+ meson setup /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0 /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10 -Dbuildtype=release -Db_ndebug=if-release -Db_vscrt=md --vsenv --native-file=/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/meson-python-native-file.ini\r\nThe Meson build system\r\nVersion: 1.2.1\r\nSource dir: /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0\r\nBuild dir: /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10\r\nBuild type: native build\r\nProject name: pandas\r\nProject version: 2.1.0\r\nC compiler for the host machine: x86_64-pc-linux-gnu-gcc (gcc 13.2.1 \"x86_64-pc-linux-gnu-gcc (Gentoo 13.2.1_p20230826 p7) 13.2.1 20230826\")\r\nC linker for the host machine: x86_64-pc-linux-gnu-gcc ld.bfd 2.41\r\nC++ compiler for the host machine: x86_64-pc-linux-gnu-g++ (gcc 13.2.1 \"x86_64-pc-linux-gnu-g++ (Gentoo 13.2.1_p20230826 p7) 13.2.1 20230826\")\r\nC++ linker for the host machine: x86_64-pc-linux-gnu-g++ ld.bfd 2.41\r\nCython compiler for the host machine: cython (cython 0.29.36)\r\nHost machine cpu family: x86_64\r\nHost machine cpu: x86_64\r\nProgram python found: YES (/usr/bin/python3.10)\r\nFound pkg-config: /usr/bin/pkg-config (1.8.1)\r\nRun-time dependency python found: YES 3.10\r\nBuild targets in project: 53\r\n\r\npandas 2.1.0\r\n\r\n User defined options\r\n Native files: /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/meson-python-native-file.ini\r\n buildtype : release\r\n vsenv : True\r\n b_ndebug : if-release\r\n b_vscrt : md\r\n\r\nFound samurai-1.9 at /usr/bin/samu\r\n\r\nVisual Studio environment is needed to run Ninja. 
It is recommended to use Meson wrapper:\r\n/usr/lib/python-exec/python3.10/meson compile -C .\r\n\r\nGenerating targets: 0%| | 0/53 eta ?\r\nGenerating targets: 98%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u258a| 52/53 eta 00:00\r\n \r\n\r\nWriting build.ninja: 0%| | 0/225 eta ?\r\n \r\n+ /usr/bin/samu -v -j12 -l0\r\n[\u2026]\r\nsamu: job failed: cython -M --fast-fail -3 --include-dir /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/pandas/_libs '-X always_allow_keywords=true' /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0/pandas/_libs/interval.pyx -o pandas/_libs/interval.cpython-310-x86_64-linux-gnu.so.p/pandas/_libs/interval.pyx.c\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.binomial\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.bytes\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.chisquare\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.choice\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.dirichlet\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.exponential\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.f\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.gamma\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.geometric\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.pareto\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.gumbel\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.poisson\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.negative_binomial\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.normal\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.laplace\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.logistic\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.lognormal\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.logseries\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.power\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.ranf\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.randint\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec 
information for: numpy.random.random\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.random_integers\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.random_sample\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.rayleigh\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.sample\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.standard_exponential\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.standard_gamma\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.standard_normal\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.uniform\u001b[0m\r\n\u001b[33mWARNING \u001b[0m \u001b[34mOverriding pythran description with argspec information for: numpy.random.weibull\u001b[0m\r\n\r\nError compiling Cython file:\r\n------------------------------------------------------------\r\n...\r\n bint kh_exist_strbox(kh_strbox_t*, khiter_t) nogil\r\n\r\n khuint_t kh_needed_n_buckets(khuint_t element_n) nogil\r\n\r\n\r\ninclude \"khash_for_primitive_helper.pxi\"\r\n^\r\n------------------------------------------------------------\r\n\r\n/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0/pandas/_libs/khash.pxd:129:0: 'khash_for_primitive_helper.pxi' not found\r\n```\r\n
      \r\n\r\nFull build log: [dev-python:pandas-2.1.0:20230831-050223.log](https://github.com/pandas-dev/pandas/files/12482393/dev-python.pandas-2.1.0.20230831-050223.log)\r\n\r\n```\r\n$ find /tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/ -name '*.pxi'\r\n/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/pandas/_libs/intervaltree.pxi\r\n/tmp/portage/dev-python/pandas-2.1.0/work/pandas-2.1.0-python3_10/pandas/_libs/sparse_op_helper.pxi\r\n```\r\n\r\nIt looks that meson files do not declare dependencies between `khash_for_primitive_helper.pxi` and `khash.pxd` files, so the former isn't necessarily created before the latter is attempt to be compiled.", "pr_html_url": "https://github.com/pandas-dev/pandas/pull/54958", "file_loc": "{'base_commit': '53243e8ec73ecf5035a63f426a9c703d6835e9a7', 'files': [{'path': 'pandas/_libs/meson.build', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [72]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["pandas/_libs/meson.build"]}} {"organization": "meta-llama", "repo_name": "llama", "base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "iss_has_pr": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/658", "iss_label": "documentation", "title": "Confusion about the default max_seq_len = 2048", "body": "When reading the class Transformer, I found that the code use max_seq_len * 2 to prepare the rotary positional encoding, which confused me for a while. Then I realized that the default max_seq_len was set to 2048, and the 'max_seq_len * 2' aims to generate 4096 positional embeddings, corresponding to the 4K context length in the paper. I understand it can achieve the purpose but why not setting max_seq_len directly to 4096? which is more clear and less likely to cause misconception.\r\n\r\nself.freqs_cis = precompute_freqs_cis(\r\n self.params.dim // self.params.n_heads, self.params.max_seq_len * 2\r\n )", "pr_html_url": "https://github.com/meta-llama/llama/pull/754", "file_loc": "{'base_commit': '7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e', 'files': [{'path': 'llama/model.py', 'status': 'modified', 'Loc': {\"('Transformer', '__init__', 414)\": {'add': [450]}}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "commit_html_url": null, "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "pr", "loc_scope": null, "info_type": null}, "loctype": {"code": ["llama/model.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pallets", "repo_name": "flask", "base_commit": "f88765d504ce2fa9bc3926c76910b11510522892", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/1224", "iss_label": "", "title": "Starting up a public server.", "body": "I ran into this problem today with one of my applications trying to make it public to my local network. \n\nC:\\Users\\Savion\\Documents\\GitHub\\Example-Flask-Website>flask\\Scripts\\python run.\npy\n- Running on http://127.0.0.1:5000/\n- Restarting with reloader\n 10.101.37.124 - - [26/Oct/2014 15:51:23] \"GET / HTTP/1.1\" 404 -\n- Running on http://0.0.0.0:5000/\n 10.101.37.124 - - [26/Oct/2014 15:51:38] \"GET / HTTP/1.1\" 404 -\n\nThe problem that i run into is the fact that this app continuously attempts to default to localhost. 
It is not until two Ctrl + C presses that it goes to 0.0.0.0, and then I still receive a 404 error in my browser. I do have routes that are valid when running locally. I have tried to create a new virtualenv and I still receive the same error, and I have reset the firewall rule for this application. None of this effort was rewarded.\n\nAny ideas as to why my app attempts to start up on localhost first, then moves over, but still returns a 404?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f88765d504ce2fa9bc3926c76910b11510522892', 'files': [{'path': 'flask/views.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1\n404 error", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/views.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "pallets", "repo_name": "flask", "base_commit": "2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/834", "iss_label": "", "title": "How to get the serialized version of the session cookie in 0.10?", "body": "In version 0.9 I could simply get the value of the `session` like this: \n\n```\nflask.session.serialize()\n```\n\nBut after upgrading to 0.10 this is not working anymore. What's the alternative? How can I get the session value?\n\n(`flask.request.cookies.get('session')` is not good for me, because I would like to get the session right after login, so it's not part of the request yet)\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1', 'files': [{'path': 'flask/sessions.py', 'Loc': {\"('SecureCookieSessionInterface', 'get_signing_serializer', 308)\": {'mod': []}, \"('TaggedJSONSerializer', 'dumps', 60)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nhow to do \u2026", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/sessions.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "pallets", "repo_name": "flask", "base_commit": "22d82e70b3647ed16c7d959a939daf533377382b", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/4015", "iss_label": "", "title": "2.0.0: build requires ContextVar module", "body": "Simply, I cannot find it.\r\n```console\r\n+ /usr/bin/python3 setup.py build '--executable=/usr/bin/python3 -s'\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 4, in \r\n setup(\r\n File \"/usr/lib/python3.8/site-packages/setuptools/__init__.py\", line 144, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/usr/lib64/python3.8/distutils/core.py\", line 121, in setup\r\n dist.parse_config_files()\r\n File \"/usr/lib/python3.8/site-packages/setuptools/dist.py\", line 689, in parse_config_files\r\n parse_configuration(self, self.command_options,\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 121, in parse_configuration\r\n meta.parse()\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 426, in parse\r\n section_parser_method(section_options)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 399, in parse_section\r\n self[name] = value\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 184, in __setitem__\r\n value = parser(value)\r\n File 
\"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 515, in _parse_version\r\n version = self._parse_attr(value, self.package_dir)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 349, in _parse_attr\r\n module = import_module(module_name)\r\n File \"/usr/lib64/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"\", line 1014, in _gcd_import\r\n File \"\", line 991, in _find_and_load\r\n File \"\", line 975, in _find_and_load_unlocked\r\n File \"\", line 671, in _load_unlocked\r\n File \"\", line 783, in exec_module\r\n File \"\", line 219, in _call_with_frames_removed\r\n File \"/home/tkloczko/rpmbuild/BUILD/Flask-2.0.0/src/flask/__init__.py\", line 7, in \r\n from .app import Flask\r\n File \"/home/tkloczko/rpmbuild/BUILD/Flask-2.0.0/src/flask/app.py\", line 19, in \r\n from werkzeug.local import ContextVar\r\nImportError: cannot import name 'ContextVar' from 'werkzeug.local' (/usr/lib/python3.8/site-packages/werkzeug/local.py)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '22d82e70b3647ed16c7d959a939daf533377382b', 'files': [{'path': 'setup.py', 'Loc': {'(None, None, None)': {'mod': [7]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pallets", "repo_name": "flask", "base_commit": "43e2d7518d2e89dc7ed0b4ac49b2d20211ad1bfa", "is_iss": 1, "iss_html_url": "https://github.com/pallets/flask/issues/2977", "iss_label": "", "title": "Serial port access problem in DEBUG mode.", "body": "### Expected Behavior\r\n\r\nSending commands through the serial port.\r\n\r\n```python\r\napp = Flask(__name__)\r\nserialPort = serial.Serial(port = \"COM5\", baudrate=1000000,\r\n bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE)\r\n\r\nlamp = {\r\n 1 : {'name' : 'n1', 'state' : True},\r\n 2 : {'name' : 'n2', 'state' : True} \r\n}\r\n\r\n@app.route(\"/\")\r\ndef hello():\r\n templateData = {\r\n 'lamp': lamp\r\n }\r\n\r\n \r\n return render_template('main.html', **templateData)\r\n\r\n\r\n@app.route(\"/setPin/\")\r\ndef action(action):\r\n\r\n if action == \"on\":\r\n\r\n serialPort.write(b\"n2c1111\\r\\n\")\r\n lamp[1][\"state\"] = True\r\n\r\n if action == \"off\":\r\n serialPort.write(b\"n2c0000\\r\\n\")\r\n lamp[1][\"state\"] = False\r\n\r\n\r\n templateData = {\r\n 'lamp': lamp\r\n }\r\n\r\n return render_template('main.html', **templateData)\r\n\r\nif __name__ == \"__main__\":\r\n app.run(host='0.0.0.0', port=5000, debug=True)\r\n```\r\n\r\n\r\n### Actual Behavior\r\n\r\nI can not access the serial port with FLASK_ENV = development and FLASK_DEBUG = 1. 
Everything works fine with DEBUG mode disabled.\r\n\r\n```pytb\r\nFLASK_APP = app.py\r\nFLASK_ENV = development\r\nFLASK_DEBUG = 1\r\nIn folder C:/Users/user/PycharmProjects/Ho_server\r\nC:\\Users\\user\\Anaconda3\\python.exe -m flask run\r\n * Serving Flask app \"app.py\" (lazy loading)\r\n * Environment: development\r\n * Debug mode: on\r\n * Restarting with stat\r\n * Debugger is active!\r\n * Debugger PIN: 138-068-963\r\n * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)\r\n127.0.0.1 - - [30/Oct/2018 10:49:27] \"GET /setPin/on HTTP/1.1\" 500 -\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\flask\\_compat.py\", line 35, in reraise\r\n raise value\r\n File \"C:\\Users\\user\\PycharmProjects\\H_server\\app.py\", line 8, in \r\n bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE)\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialwin32.py\", line 31, in __init__\r\n super(Serial, self).__init__(*args, **kwargs)\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialutil.py\", line 240, in __init__\r\n self.open()\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialwin32.py\", line 62, in open\r\n raise SerialException(\"could not open port {!r}: {!r}\".format(self.portstr, ctypes.WinError()))\r\nserial.serialutil.SerialException: could not open port 'COM5': PermissionError(13, 'Access is denied.', None, 5)\r\n```\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "pallets", "repo_name": "flask", "base_commit": "1a7fd980f8579bd7d7d53c812a77c1dc64be52ba", "is_iss": 0, "iss_html_url": "https://github.com/pallets/flask/issues/1749", "iss_label": "", "title": "JSONEncoder and aware datetimes", "body": "I was surprised to see that though flask.json.JSONEncoder accepts datetime objects, it ignores the timezone. I checked werkzeug.http.http_date and it can handle timezone-aware dates just fine if they are passed in, but the JSONEncoder insists on transforming the datetime to a timetuple, like this\n\n `return http_date(o.timetuple())`\n\nThis means I have to convert all my dates to UTC before encoding them, otherwise I should override the default() method in the encoder. 
Can you help me understand why the encoder was made to function with naive dates only?\nThx\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1a7fd980f8579bd7d7d53c812a77c1dc64be52ba', 'files': [{'path': 'flask/json.py', 'Loc': {\"('JSONEncoder', 'default', 60)\": {'mod': [78]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/json.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "144d43830f663808c5fbca75b797350060acf7dd", "is_iss": 0, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/559", "iss_label": "", "title": "Results files saved to specific folder", "body": "Having just installed Sherlock I was surprised to see the results files are just jumbled in with everything else instead of being in their own Results folder.\r\n\r\nHaving a separate folder would keep things cleaner especially as you use it more and the number of files increases.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '144d43830f663808c5fbca75b797350060acf7dd', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 65)': {'mod': [65]}}, 'status': 'modified'}, {'path': 'sherlock/sherlock.py', 'Loc': {\"(None, 'main', 462)\": {'mod': [478]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+\nDoc"}, "loctype": {"code": ["sherlock/sherlock.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "7ec56895a37ada47edd6573249c553379254d14a", "is_iss": 0, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1911", "iss_label": "question", "title": "How do you search for usernames? New to this. ", "body": "\r\n\r\n## Checklist\r\n\r\n- [ ] I'm asking a question regarding Sherlock\r\n- [ ] My question is not a tech support question.\r\n\r\n**We are not your tech support**. 
\r\nIf you have questions related to `pip`, `git`, or something that is not related to Sherlock, please ask them on [Stack Overflow](https://stackoverflow.com/) or [r/learnpython](https://www.reddit.com/r/learnpython/)\r\n\r\n\r\n## Question\r\n\r\nASK YOUR QUESTION HERE\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7ec56895a37ada47edd6573249c553379254d14a', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "65ce128b7fd8c8915c40495191d9c136f1d2322b", "is_iss": 0, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1297", "iss_label": "bug", "title": "name 'requests' is not defined", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n- [x] I'm reporting a bug in Sherlock's functionality\r\n- [x] The bug I'm reporting is not a false positive or a false negative\r\n- [x] I've verified that I'm running the latest version of Sherlock\r\n- [x] I've checked for similar bug reports including closed ones\r\n- [x] I've checked for pull requests that attempt to fix this bug\r\n\r\n## Description\r\n\r\n\r\nERROR: Problem while attempting to access data file URL 'https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json': name 'requests' is not defined\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '65ce128b7fd8c8915c40495191d9c136f1d2322b', 'files': [{'path': 'sherlock/sites.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sherlock/sites.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "f63e17066dc4881ee5a164aed60b6e8f1e9ab129", "is_iss": 0, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/462", "iss_label": "environment", "title": "File \"sherlock.py\", line 24, in from requests_futures.sessions import FuturesSession ModuleNotFoundError: No module named 'requests_futures'", "body": "File \"sherlock.py\", line 24, in \r\n from requests_futures.sessions import FuturesSession\r\nModuleNotFoundError: No module named 'requests_futures'", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f63e17066dc4881ee5a164aed60b6e8f1e9ab129', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "6c6faff416896a41701aa3e24e5b5a584bd5cb44", "is_iss": 0, "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/236", "iss_label": "question", "title": "No module named 'torrequest'", "body": "Hi,\r\nsimilar problem to module \"requests_futures\"\r\n\r\nTraceback (most recent call last):\r\n File \"sherlock.py\", line 25, in \r\n from torrequest import TorRequest\r\nModuleNotFoundError: No module named 
'torrequest'\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6c6faff416896a41701aa3e24e5b5a584bd5cb44', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\ndependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}
{"organization": "keras-team", "repo_name": "keras", "base_commit": "c0d95fd6c2cd8ffc0738819825c3065e3c89977c", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/4954", "iss_label": "", "title": "TimeDistributed Wrapper not working with LSTM/GRU", "body": "Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on [StackOverflow](http://stackoverflow.com/questions/tagged/keras) or [join the Keras Slack channel](https://keras-slack-autojoin.herokuapp.com/) and ask there instead of filing a GitHub issue.\r\n\r\nThank you!\r\n\r\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\r\npip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\r\n\r\n- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](https://www.tensorflow.org/get_started/os_setup).\r\n\r\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\r\npip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\r\n\r\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\r\n\r\n\r\nI am trying to apply word-level attention to words in a document after passing the sentences through a GRU. However, the TimeDistributed Wrapper isn't working with GRU/LSTM. 
\r\n\r\nI get the following error \r\n\r\n```python\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/engine/topology.py\", line 569, in __call__\r\n self.add_inbound_node(inbound_layers, node_indices, tensor_indices)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/engine/topology.py\", line 632, in add_inbound_node\r\n Node.create_node(self, inbound_layers, node_indices, tensor_indices)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/engine/topology.py\", line 164, in create_node\r\n output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/layers/wrappers.py\", line 129, in call\r\n y = self.layer.call(X) # (nb_samples * timesteps, ...)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/layers/recurrent.py\", line 201, in call\r\n input_shape = K.int_shape(x)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py\", line 128, in int_shape\r\n raise Exception('Not a Keras tensor:', x)\r\nException: ('Not a Keras tensor:', Reshape{3}.0)\r\n```\r\nThe code snippet is shown below\r\n\r\n```python\r\ninput_layer = Input(shape=(document_size, img_h,), dtype='int32', name='input_layer')\r\nembedding_layer = TimeDistributed(Embedding(len(W2V), img_w, input_length=img_h, weights=[W2V], trainable=True, mask_zero=True))(input_layer)\r\ngru_word = TimeDistributed(GRU(GRU_layers[0], return_sequences=True, activation=conv_non_linear))(embedding_layer)\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c0d95fd6c2cd8ffc0738819825c3065e3c89977c', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}}
{"organization": "keras-team", "repo_name": "keras", "base_commit": "980a6be629610ee58c1eae5a65a4724ce650597b", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/16234", "iss_label": "type:support", "title": "Compiling model in callback causes TypeError", "body": "**System information**.\r\n- Have I written custom code (as opposed to using a stock example script provided in Keras): yes\r\n- TensorFlow version (use command below): 2.8.0 (2.4 too)\r\n- Python version: 3.7\r\n\r\n**Describe the problem**.\r\n\r\nIn a fine-tuning case I would like to do a transfer-learning phase first (with the fine-tuned layers frozen) and after that, all layers should be unfrozen. I wrote a callback that unfreezes the layers after a few epochs. Unfortunately, after changing the layers' `trainable` attribute, the model should be recompiled - and the recompilation causes the `TypeError` (see colab). 
\r\n\r\nI am aware that I can work around this by compiling and fitting the model twice - for both phases separately - but the usage of a callback seems more elegant to me.\r\n\r\n**Standalone code to reproduce the issue**.\r\n\r\nhttps://colab.research.google.com/drive/1u6VlH6EIQGXSp7vEIngTasp3v2EE42Wi?usp=sharing\r\n\r\n**Source code / logs**.\r\n\r\n```\r\nTypeError: 'NoneType' object is not callable\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '980a6be629610ee58c1eae5a65a4724ce650597b', 'files': [{'path': 'keras/engine/training.py', 'Loc': {\"('Model', 'make_train_function', 998)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py"], "doc": [], "test": [], "config": [], "asset": []}}
{"organization": "keras-team", "repo_name": "keras", "base_commit": "90f441a6a0ed4334cac53760289061818a68b7c1", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/2893", "iss_label": "", "title": "Is the cifar10_cnn.py example actually performing data augmentation?", "body": "When `datagen.fit(X_train)` is called in the [`cifar10_cnn.py` example](https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py#L103), shouldn't it be (when `data_augmentation=True`):\n\n``` python\ndatagen.fit(X_train, augment=True)\n```\n\nas the [default value for `augment` is `False`](https://github.com/fchollet/keras/blob/master/keras/preprocessing/image.py#L410)?\n\nAlso, am I right in thinking when using `augment=True` the original (i.e. non-augmented - ignoring any normalisation/standardisation) data is not necessarily trained on? If so, I thought data augmentation is a method of artificially increasing the size of your dataset, so shouldn't we additionally be training on the non-augmented data? Thanks\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '90f441a6a0ed4334cac53760289061818a68b7c1', 'files': [{'path': 'keras/preprocessing/image.py', 'Loc': {\"('ImageDataGenerator', 'fit', 404)\": {'mod': [419, 420, 421, 422, 423]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/preprocessing/image.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "654404c2ed8db47a5361a3bff9126a16507c9c4c", "is_iss": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/1787", "iss_label": "", "title": "What happened to WordContextProduct?", "body": "``` python\nIn [1]: import keras\n\nIn [2]: keras.__version__\nOut[2]: '0.3.2'\n\nIn [3]: from keras.layers.embeddings import WordContextProduct\nUsing Theano backend.\n/usr/local/lib/python3.5/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module.\n warnings.warn(\"downsample module has been moved to the pool module.\")\n---------------------------------------------------------------------------\nImportError Traceback (most recent call last)\n in ()\n----> 1 from keras.layers.embeddings import WordContextProduct\n\nImportError: cannot import name 'WordContextProduct'\n```\n\nThis page now returns a 404: https://github.com/fchollet/keras/blob/master/examples/skipgram_word_embeddings.py\n\nWas this code taken out of keras, or just moved somewhere else?\n\nThanks,\n\nZach\n\n---\n\nPlease make sure that the boxes below are checked before you submit your issue. Thank you!\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\nI'm trying something like this:\n\n``` python\nmodels = []\n\n# Word vectors\nmodel_word = Sequential()\nmodel_word.add(Embedding(1e4, 300, input_length=1))\nmodel_word.add(Reshape(dims=(300,)))\nmodels.append(model_word)\n\n# Context vectors\nmodel_context = Sequential()\nmodel_context.add(Embedding(1e4, 300, input_length=1))\nmodel_context.add(Reshape(dims=(300,)))\nmodels.append(model_context)\n\n# Combined model\nmodel = Sequential()\nmodel.add(Merge(models, mode='dot'))\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\nmodel.compile(loss='mean_squared_error', optimizer=Adam(lr=0.001))\n```\n\nDoes that look reasonable? 
And then as input, I need to provide 2 lists of indexes?\n\"", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [54], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "8778add0d66aed64a8970c34576bf5800bc19170", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/3335", "iss_label": "", "title": "Masking the output of a conv layer", "body": "Hi,\nI am trying to apply a given mask in the output of a conv layer. The simplest form of my problem can be seen in the img\n\n![image](https://cloud.githubusercontent.com/assets/810340/17194147/e8728ad4-542c-11e6-8c60-b2949c288cec.png)\n\nThe mask should be considered as an input when training/predicting. I have already tried to use the Merge layer (mode='mul') to apply the input mask as follows:\n\n``` python\nmain_input= Input(shape=(3, 64, 64))\nmask1_input = Input(shape=(1, 64, 64))\nmask2_input = Input(shape=(1, 64, 64))\n\nconv1 = Convolution2D(1,7,7, border_mode='same')(main_input)\nmerged_model1 = Sequential()\nmerged_model1.add(Merge([conv1, mask1_input], mode='mul'))\n\nconv2 = Convolution2D(1, 7,7, border_mode='same')(main_input)\nmerged_model2 = Sequential()\nmerged_model2.add(Merge([conv2, mask2_input], mode='mul'))\n\nmodel = Sequential()\nmodel.add(Merge([merged_model1,merged_model2],mode='sum'))\n```\n\nBut it is not working, maybe because I'm trying to merge a layer with a Tensor. But even if I could do that, I don't feel this is the right way to do that. Can someone help?\n\nPlease make sure that the boxes below are checked before you submit your issue. Thank you!\n- [X] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [X] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8778add0d66aed64a8970c34576bf5800bc19170', 'files': [{'path': 'keras/src/models/model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/models/model.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "ed07472bc5fc985982db355135d37059a1f887a9", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/13101", "iss_label": "type:support", "title": "model.fit : AttributeError: 'Model' object has no attribute '_compile_metrics'", "body": "**System information** \r\n- Have I written custom code (as opposed to using example directory): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19.3\r\n- TensorFlow backend (yes / no): yes\r\n- TensorFlow version: 2.0.0b1\r\n- Keras version: 2.2.4-tf\r\n- Python version: 3.6\r\n- CUDA/cuDNN version: /\r\n- GPU model and memory: GTX 940MX, 430.26\r\n\r\n**Describe the current behavior** \r\nThe model.fit() function throws a `AttributeError: 'Model' object has no attribute '_compile_metrics'` exception.\r\n\r\n**Describe the expected behavior** \r\nIt should work ?\r\n\r\n**Code to reproduce the issue** \r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ninput_3D = tf.keras.Input(shape=(None, None, None, 1)) # unknown width, length and depth, 1 gray channel\r\nnetwork_3D = tf.keras.layers.Conv3D(\r\n filters = 128, # dimensionality of output space\r\n kernel_size = 5, # shape of 2D convolution window (5x5)\r\n strides = 1, # stride of convolution along all spatial dimensions\r\n padding = \"same\", data_format = \"channels_last\", # input with shape (batch, height, width, channels)\r\n activation = tf.keras.layers.LeakyReLU(alpha = 0.2), # activation function to use\r\n use_bias = True,\r\n kernel_initializer = tf.keras.initializers.TruncatedNormal(stddev = 1e-2),\r\n # initializer for the kernel weights matrix\r\n bias_initializer = 'zeros', # initializer for the bias vector\r\n input_shape = (None, None, None, 1)\r\n)(input_3D)\r\nnetwork_3D = tf.keras.layers.BatchNormalization(\r\n momentum = 0.1, # momentum + decay = 1.0\r\n epsilon = 1e-5,\r\n scale = True\r\n)(network_3D)\r\n\r\nmodel = tf.keras.Model(inputs = input_3D, outputs = network_3D)\r\nmodel.loss = tf.losses.mean_squared_error\r\nmodel.optimizer = tf.keras.optimizers.Adam(learning_rate = 0.002)\r\nv = np.zeros((100,100,100,100))\r\nl = np.zeros((100,100,100))\r\nmodel.fit(v, l, epochs = 20, batch_size = 1)\r\n``` \r\n\r\n**Other info / logs** \r\n```python\r\nTraceback (most recent call last):\r\n File \".../venv/lib/python3.6/site-packages/IPython/core/interactiveshell.py\", line 3296, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"\", line 1, in \r\n history = model.fit(v, l, epochs = 20, batch_size = 1)\r\n File \".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 643, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n File \".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py\", line 632, in fit\r\n shuffle=shuffle)\r\n File 
\".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 2385, in _standardize_user_data\r\n metrics=self._compile_metrics,\r\nAttributeError: 'Model' object has no attribute '_compile_metrics'\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ed07472bc5fc985982db355135d37059a1f887a9', 'files': [{'path': 'keras/engine/training.py', 'Loc': {\"('Model', 'compile', 40)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "a3d160b9467c99cbb27f9aa0382c759f45c8ee66", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/9741", "iss_label": "", "title": "Improve Keras Documentation User Experience for Long Code Snippets By Removing The Need For Horizontal Slide Bars", "body": "**Category**: documentation user-experience\r\n**Comment**: modify highlight.js to wrap long documentation code snippets\r\n**Why**: eliminates the need for a user to manually click and slide a horizontal slider just to get a quick sense of what available parameters and their default values are\r\n\r\n**Context**\r\nWhile reading the documentation, and coming from a scikit-learn background, I really like how their documentation shows all the class and method parameters ([example page](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)). It's very helpful to quickly be able to see the default parameters.\r\n\r\nTake [Dense](https://keras.io/layers/core/#dense) for example. If the documentation looked like this (imagine this a code block, not individually highlighted lines):\r\n\r\n`keras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)`\r\n\r\nBenefits:\r\n- easy to read\r\n- no scrolling a horizontal slider\r\n- immediately tells me the available parameters and their default values\r\n\r\nCompare that experience to the current Keras experience:\r\n\r\n```\r\nkeras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)\r\n```\r\n\r\nDisadvantages:\r\n- requires scrolling horizontally to see the rest\r\n- easy to lose track of where you are while scrolling\r\n- requires physical action to see everything\r\n\r\nThe Keras team no-doubt is busy with much bigger concerns than documentation formatting. One could say that the \"Arguments\" are all listed below or by clicking the \"Source\". True, however the key point I'm trying to make is usability, and quick readability. Reading through an \"Argument\"'s verbose description, or having to scroll horizontally is not quick nor an optimal experience.\r\n\r\nI'm not going to make a case for why making documentation easy-to-read is important. 
I think the Keras documentation **content** itself is outstanding.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a3d160b9467c99cbb27f9aa0382c759f45c8ee66', 'files': [{'path': 'docs/autogen.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["docs/autogen.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "7a12fd0f8597760cf8e1238a9b021e247693517b", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/2372", "iss_label": "", "title": "problem of save/load model", "body": "HI, \n\nThanks for making such a wonderful tool!\n\nI'm using Keras 1.0. I want to save and load the model both the arch and the parameters. So I use the method in FAQ. Here is the code.\n\n```\ndef save_model(self, model, options):\n json_string = model.to_json()\n open(options['file_arch'], 'w').write(json_string)\n model.save_weights(options['file_weight'])\n\ndef load_model(self, options):\n self.model = model_from_json(open(options['file_arch']).read())\n self.model.load_weights(options['file_weight'])\n return self.model\n```\n\nWhen I load model and use model.predict(), there is a error:\nAttributeError: 'NoneType' object has no attribute 'predict'\n\nDon't know why. If I don't load the model from file, just train a model and use it, everything seems ok.\n\nI checked the issues, most people just need to load the parameters. Is it possible when I load the architecture, I overwrite the old model and loose the model.predict()?\n\nThanks again for making Keras!\n\nBen\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7a12fd0f8597760cf8e1238a9b021e247693517b', 'files': [{'path': 'keras/src/trainers/trainer.py', 'Loc': {\"('Trainer', 'compile', 40)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/trainers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "284ef7b495a61238dccc6149996c4cb88fef1c5a", "is_iss": 1, "iss_html_url": "https://github.com/keras-team/keras/issues/933", "iss_label": "", "title": "Same model but graph gives bad performance", "body": "Hello, \n\nI am learning to use Graph as it seems more powerful so I implemented one of my previous model which uses Sequential. 
Here is the model using sequential (number of dimension set in random):\n\n```\ndef build_generation_embedding_model(self, dim):\n print \"Build model ...\"\n input_model = Sequential()\n input_model.add(TimeDistributedDense(dim, input_shape=(10,10)))\n input_model.add(LSTM(dim, return_sequences=False))\n input_model.add(Dense(dim))\n canonical_model = Sequential()\n canonical_model.add(TimeDistributedDense(dim, input_shape=(15,15)))\n canonical_model.add(LSTM(dim, return_sequences=False))\n canonical_model.add(Dense(dim))\n self.model = Sequential()\n self.model.add(Merge([input_model, canonical_model], mode='concat'))\n self.model.add(Dense(15))\n self.model.add(Activation('softmax'))\n self.model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n```\n\nThe model works fine and below is my reimplementation using Graph:\n\n```\ndef build_generation_embedding_model_graph(self, dim):\n self.model = Graph()\n self.model.add_input(name='input1', input_shape=(10,10))\n self.model.add_input(name='canonical', input_shape=(15,15))\n self.model.add_node(TimeDistributedDense(dim), name='Embed_input1', input='input1')\n self.model.add_node(TimeDistributedDense(dim), name='Embed_canonical', input='canonical')\n self.model.add_node(LSTM(dim, return_sequences=False), name='Hidden_input1', input='Embed_input1')\n self.model.add_node(LSTM(dim, return_sequences=False), name='Hidden_canonical', input='Embed_canonical')\n self.model.add_node(Dense(15), name='merge', inputs=['Hidden_input1','Hidden_canonical'], merge_mode='concat')\n self.model.add_node(Activation('softmax'), name='activation', input='merge')\n self.model.add_output(name='output', input='merge')\n self.model.compile('rmsprop', {'output':'categorical_crossentropy'})\n```\n\nMy impression is that they are exactly the same model (grateful if somebody spotted something wrong there). But the model based on Graph gives a loss of 3.6 while the loss for the other one is around 0.002. \n\nIs there a reason for this please ?\n\nThank you for your help\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [36], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/7603", "iss_label": "", "title": "Loss Increases after some epochs ", "body": "I have tried different convolutional neural network codes and I am running into a similar issue. The network starts out training well and decreases the loss but after sometime the loss just starts to increase. 
I have shown an example below: \r\nEpoch 15/800\r\n1562/1562 [==============================] - 49s - loss: 0.9050 - acc: 0.6827 - val_loss: 0.7667 - val_acc: 0.7323\r\nEpoch 16/800\r\n1562/1562 [==============================] - 49s - loss: 0.8906 - acc: 0.6864 - val_loss: 0.7404 - val_acc: 0.7434\r\nEpoch 380/800\r\n1562/1562 [==============================] - 49s - loss: 1.5519 - acc: 0.4880 - val_loss: 1.4250 - val_acc: 0.5233\r\nEpoch 381/800\r\n1562/1562 [==============================] - 48s - loss: 1.5416 - acc: 0.4897 - val_loss: 1.5032 - val_acc: 0.4868\r\nEpoch 800/800\r\n1562/1562 [==============================] - 49s - loss: 1.8483 - acc: 0.3402 - val_loss: 1.9454 - val_acc: 0.2398\r\n\r\nI have tried this on different cifar10 architectures I have found on githubs. I am training this on a GPU Titan-X Pascal. This only happens when I train the network in batches and with data augmentation. I have changed the optimizer, the initial learning rate etc. I have also attached a link to the code. I just want a cifar10 model with good enough accuracy for my tests, so any help will be appreciated. The code is from this:\r\nhttps://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec', 'files': [{'path': 'examples/cifar10_cnn.py', 'Loc': {'(None, None, None)': {'mod': [65]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/cifar10_cnn.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "530eff62e5463e00d73e72c51cc830b9ac3a14ab", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/3997", "iss_label": "", "title": "Using keras for Distributed training raise RuntimeError(\"Graph is finalized and cannot be modified.\")", "body": "I'm using keras for distributed training with following code:\n\n``` python\n#!/usr/bin/env python\n# -*- coding:utf-8 -*-\n# Created by Enigma on 2016/9/26\n\nimport numpy as np\nimport tensorflow as tf\n\n# Define Hyperparameters\nFLAGS = tf.app.flags.FLAGS\n\n# For missions\ntf.app.flags.DEFINE_string(\"ps_hosts\", \"\",\n \"Comma-separated list of hostname:port pairs\")\ntf.app.flags.DEFINE_string(\"worker_hosts\", \"\",\n \"Comma-separated list of hostname:port pairs\")\ntf.app.flags.DEFINE_string(\"job_name\", \"\", \"One of 'ps', 'worker'\")\ntf.app.flags.DEFINE_integer(\"task_index\", 0, \"Index of task within the job\")\n\n# Hyperparameters\n\nfrom keras import backend as K\nfrom keras.layers import Input, Dense\nfrom keras.models import Model\n\n\ndef main(_):\n ps_hosts = FLAGS.ps_hosts.split(\",\")\n worker_hosts = FLAGS.worker_hosts.split(\",\")\n cluster = tf.train.ClusterSpec({\"ps\": ps_hosts, \"worker\": worker_hosts})\n\n server_config = tf.ConfigProto(\n gpu_options=tf.GPUOptions(allow_growth=True),\n log_device_placement=True)\n server = tf.train.Server(cluster, config=server_config,\n job_name=FLAGS.job_name, task_index=FLAGS.task_index)\n\n if FLAGS.job_name == \"ps\":\n server.join()\n elif FLAGS.job_name == \"worker\":\n with tf.device(tf.train.replica_device_setter(\n worker_device=\"/job:worker/task:%d/cpu:0\" % FLAGS.task_index,\n cluster=cluster)):\n global_step = tf.Variable(0, name='global_step', trainable=False)\n 
inputs = Input(shape=[1, ])\n hidden = Dense(10, activation='relu')(inputs)\n output = Dense(1, activation='sigmoid')(hidden)\n model = Model(input=inputs, output=output)\n\n saver = tf.train.Saver()\n summary_op = tf.merge_all_summaries()\n\n sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),\n logdir=\"./checkpoint/\",\n # init_op=init_op,\n summary_op=summary_op,\n saver=saver,\n global_step=global_step,\n save_model_secs=60)\n with sv.managed_session(server.target) as sess:\n step = 0\n K.set_session(sess)\n model.compile(optimizer='sgd', loss='mse')\n while step < 1000000:\n train_x = np.random.randn(1)\n train_y = 2 * train_x + np.random.randn(1) * 0.33 + 10\n model.fit(train_x, train_y)\n sv.stop()\n\nif __name__ == \"__main__\":\n tf.app.run()\n```\n\nthen I run it with:\n\n```\n/opt/anaconda3/bin/python /cache/allenwoods/keras_dis_test.py --ps_hosts=0.0.0.0:48636 --worker_hosts=0.0.0.0:46261 --job_name=ps --task_index=0\n/opt/anaconda3/bin/python /cache/allenwoods/keras_dis_test.py --ps_hosts=0.0.0.0:48636 --worker_hosts=0.0.0.0:46261 --job_name=worker --task_index=0\n```\n\nit doesn't work and return\n\n```\nTraceback (most recent call last):\n File \"/cache/allenwoods/keras_dis_test.py\", line 73, in \n tf.app.run()\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/platform/app.py\", line 30, in run\n sys.exit(main(sys.argv[:1] + flags_passthrough))\n File \"/cache/allenwoods/keras_dis_test.py\", line 69, in main\n model.fit(train_x, train_y)\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 969, in managed_session\n self.stop(close_summary_writer=close_summary_writer)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 797, in stop\n stop_grace_period_secs=self._stop_grace_secs)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/coordinator.py\", line 386, in join\n six.reraise(*self._exc_info_to_raise)\n File \"/opt/anaconda3/lib/python3.5/site-packages/six.py\", line 686, in reraise\n raise value\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 959, in managed_session\n yield sess\n File \"/cache/allenwoods/VRLforTraffic/src/missions/keras_dis_test.py\", line 65, in main\n model.compile(optimizer='sgd', loss='mse')\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/engine/training.py\", line 484, in compile\n self.optimizer = optimizers.get(optimizer)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/optimizers.py\", line 580, in get\n instantiate=True, kwargs=kwargs)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/utils/generic_utils.py\", line 18, in get_from_module\n return res()\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/optimizers.py\", line 134, in __init__\n self.iterations = K.variable(0.)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py\", line 149, in variable\n v = tf.Variable(value, dtype=_convert_string_dtype(dtype), name=name)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 215, in __init__\n dtype=dtype)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 327, in _init_from_args\n self._snapshot = array_ops.identity(self._variable, name=\"read\")\n File 
\"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 3645, in get_controller\n yield default\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2891, in name_scope\n yield \"\" if new_stack is None else new_stack + \"/\"\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 293, in _init_from_args\n initial_value, name=\"initial_value\", dtype=dtype)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 657, in convert_to_tensor\n ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py\", line 180, in _constant_tensor_conversion_function\n return constant(v, dtype=dtype, name=name)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py\", line 167, in constant\n attrs={\"value\": tensor_value, \"dtype\": dtype_value}, name=name).outputs[0]\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2339, in create_op\n self._check_not_finalized()\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2080, in _check_not_finalized\n raise RuntimeError(\"Graph is finalized and cannot be modified.\")\nRuntimeError: Graph is finalized and cannot be modified.\n```\n\nI wondering if it happens because keras' model wasn't created as part of the graph used in tf.train.Supervisor, but I have not a clue on how to prove it or fix it. 
Any idea\uff1f\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '530eff62e5463e00d73e72c51cc830b9ac3a14ab', 'files': [{'path': 'keras/engine/training.py', 'Loc': {\"('Model', '_make_train_function', 685)\": {'mod': []}, \"('Model', '_make_test_function', 705)\": {'mod': []}, \"('Model', '_make_predict_function', 720)\": {'mod': []}}, 'status': 'modified'}, {'path': 'keras/backend/tensorflow_backend.py', 'Loc': {\"(None, 'manual_variable_initialization', 31)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py", "keras/backend/tensorflow_backend.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "c2e36f369b411ad1d0a40ac096fe35f73b9dffd3", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/4810", "iss_label": "", "title": "Parent module '' not loaded, cannot perform relative import with vgg16.py", "body": "just set up my ubuntu and have the python 3.5 installed, together with Keras...the following occurs:\r\n\r\nRESTART: /usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py \r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py\", line 14, in \r\n from ..models import Model\r\nSystemError: Parent module '' not loaded, cannot perform relative import\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c2e36f369b411ad1d0a40ac096fe35f73b9dffd3', 'files': [{'path': 'keras/applications/vgg16.py', 'Loc': {'(None, None, None)': {'mod': [14, 15, 16, 17, 18, 19, 20, 21]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/applications/vgg16.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "keras-team", "repo_name": "keras", "base_commit": "da86250e5a95a7adccabd8821b0d51508c82bddc", "is_iss": 0, "iss_html_url": "https://github.com/keras-team/keras/issues/18439", "iss_label": "stat:awaiting response from contributor\nstale\ntype:Bug", "title": "Problem with framework agnostic KerasVariable slicing with another KerasVariable", "body": "I defined a KerasVariable with shape (n,d) in a `keras.Layer()` using `self.add_weight()`. I've also defined another KerasVariable with shape (1) , dtype=\"int32\", and value 0. 
\r\n\r\n```\r\nself.first_variable = self.add_weight(\r\n initializer=\"zeros\", shape=(self.N,input_shape[-1]), trainable=False\r\n)\r\nself.second_variable = self.add_weight(initializer=\"zeros\",shape=(1), trainable=False, dtype=\"int32\")\r\n```\r\n\r\nDuring a call to this custom layer, I'm trying to retrieve a specific index of the first variable using the 2nd variable with:\r\n\r\n`self.first_variable[self.second_variable.value]`\r\n\r\nThis works as expected in pytorch backend, but throws an error in tensorflow backend.\r\n\r\n```\r\nOnly integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got \r\n\r\nArguments received by CustomLayer.call():\r\n \u2022 x=tf.Tensor(shape=(None, 1600), dtype=float32)\r\n \u2022 training=True\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'da86250e5a95a7adccabd8821b0d51508c82bddc', 'files': [{'path': 'keras/src/ops/core.py', 'Loc': {\"(None, 'slice', 388)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/ops/core.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "09d9f63c98f9c4fc0953dd3fd6fb4589e9e1f6f3", "is_iss": 1, "iss_html_url": "https://github.com/nvbn/thefuck/issues/376", "iss_label": "", "title": "Shell history polution", "body": "I haven't used this, but I just thought maybe this is not such a good idea because it's going to make traversing shell history really irritating. Does this do anything to get around that, or are there any workarounds?\n\nIf not, I know in zsh you can just populate the command line with whatever you want using LBUFFER and RBUFFER. What if you made it an option to type \"fuck\" then hit ctrl-F (for \"fuck\"), and it would just replace your command line with the correction, and if there's multiple candidates cycle through them by hitting ctrl-F again. That also lets you edit the correction however you want as well.\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ohmyzsh", "pro": "ohmyzsh", "path": ["plugins/thefuck"]}], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["plugins/thefuck"]}} -{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "6975d30818792f1b37de702fc93c66023c4c50d5", "is_iss": 0, "iss_html_url": "https://github.com/nvbn/thefuck/issues/1087", "iss_label": "", "title": "Thinks 'sl' is install python softlayer ", "body": "\r\n![image](https://user-images.githubusercontent.com/13007697/81414970-66971080-910d-11ea-8a44-da5ab9ca77f9.png)\r\nAh, yes. 
This wasn't a mis-spelling of ls at all, but me installing Python-Softlayer.\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.30 using Python 3.8.2 and Bash 5.0.16(1)-release\r\n\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\nManjaro\r\n\r\nHow to reproduce the bug:\r\n\r\nType sl in the terminal, then fuck\r\n\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6975d30818792f1b37de702fc93c66023c4c50d5', 'files': [{'path': 'thefuck/rules/sl_ls.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/sl_ls.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "921778a7cfa442409d17ab946c5f579e308c4f2b", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/404", "iss_label": "invalid", "title": "api\u8c03\u7528\u65f6\uff0c\u56de\u7b54\u7684\u5185\u5bb9\u4e2d\u51fa\u73b0\u83ab\u540d\u5176\u5999\u7684\u81ea\u52a8\u95ee\u7b54", "body": "\u4f7f\u7528\u7684baichuan-13b\u6a21\u578b\r\n\u4f7f\u7528\u7684scr/api_demo.py\r\n\u63d0\u95ee\u5185\u5bb9\u4e3a\uff1a\u4f60\u597d\r\n\u56de\u7b54\u4f1a\u5982\u56fe\r\n![image](https://github.com/hiyouga/LLaMA-Efficient-Tuning/assets/26214176/0d2beb92-e3b4-4126-a84f-d30bde97a194)\r\n\r\n\u4e0d\u660e\u767d\u4e3a\u4ec0\u4e48\u4f1a\u51fa\u73b0\u81ea\u52a8\u7684\u591a\u8f6e\u81ea\u6211\u95ee\u7b54", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '921778a7cfa442409d17ab946c5f579e308c4f2b', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nreadme\u4e2d\u63d0\u53ca", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "984b202f835d6f3f4869cbb1f0460bb2d9163fc1", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/6562", "iss_label": "solved", "title": "Batch Inference Error for qwen2vl Model After Full Fine-Tuning", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n- `llamafactory` version: 0.9.2.dev0\r\n- Python version: 3.8.20\r\n- PyTorch version: 2.4.1+cu121 (GPU)\r\n- Transformers version: 4.46.1\r\n- Datasets version: 3.1.0\r\n- Accelerate version: 1.0.1\r\n- PEFT version: 0.12.0\r\n- TRL version: 0.9.6\n\n### Reproduction\n\n\r\nI have fine-tuned the qwen2vl model using the command:\r\n\r\n```python\r\nllamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml\r\n```\r\nAfter saving the model in the \"saves\" directory, I attempted to perform batch inference using the provided script:\r\n\r\n```python\r\npython scripts/vllm_infer.py --model_name_or_path path_to_merged_model --dataset alpaca_en_demo\r\n```\r\nHowever, I encountered the following error:\r\n\r\n```python\r\nValueError: This model does not support image input.\r\n```\r\n\r\n1.The model_path I used points to the model saved after running the full fine-tuning script.\r\n2.I have successfully used the LoRA fine-tuned model (trained with the lora_sft script and merged 
with merge_lora script), which allows for inference using the method provided in the qwen2vl documentation.\r\n3.However, the model saved after full fine-tuning does not seem to support direct inference in the same way.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '984b202f835d6f3f4869cbb1f0460bb2d9163fc1', 'files': [{'path': 'scripts/vllm_infer.py', 'Loc': {\"(None, 'vllm_infer', 38)\": {'mod': [43]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/vllm_infer.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "4ed2b629a51ef58d229c795e85238d40346ecb58", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/5478", "iss_label": "solved", "title": "Can we set default_system in yaml file when training?", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n- `llamafactory` version: 0.8.4.dev0\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyTorch version: 2.4.0+cu121 (GPU)\r\n- Transformers version: 4.44.2\r\n- Datasets version: 2.21.0\r\n- Accelerate version: 0.33.0\r\n- PEFT version: 0.12.0\r\n- TRL version: 0.9.6\r\n- GPU type: NVIDIA A800-SXM4-80GB\r\n- DeepSpeed version: 0.15.0\n\n### Reproduction\n\n llamafactory-cli train\n\n### Expected behavior\n\nWe do not need the `default_system` in `template.py`.\r\nSet `default_system` in training yaml file to overwrite so we do not need to modify the source code in `template.py`.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '4ed2b629a51ef58d229c795e85238d40346ecb58', 'files': [{'path': 'data/', 'Loc': {}}, {'path': 'data/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["data/"]}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "18c6e6fea9dcc77c03b36301efe2025a87e177d5", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1971", "iss_label": "solved", "title": "llama'response repeat input then the answer", "body": "### Reminder\n\n- [ ] I have read the README and searched the existing issues.\n\n### Reproduction\n\n input_ids = tokenizer([\"[INST] \" +{text}\" + \" [/INST]\"], return_tensors=\"pt\", add_special_tokens=False).input_ids.to('cuda')\r\n\r\n generate_input = {\r\n \"input_ids\": input_ids,\r\n \"max_new_tokens\": 512,\r\n \"do_sample\": True,\r\n \"top_k\": 10,\r\n \"top_p\": 0.95,\r\n \"temperature\": 0.01,\r\n \"repetition_penalty\": 1.3,\r\n \"eos_token_id\": tokenizer.eos_token_id,\r\n \"bos_token_id\": tokenizer.bos_token_id,\r\n \"pad_token_id\": tokenizer.pad_token_id\r\n }\r\n\r\n generate_ids = model.generate(**generate_input)\r\n response = tokenizer.decode(generate_ids[0], skip_special_tokens=True)\r\n print(response)\n\n### Expected behavior\n\nI expect that llama just response the answer. 
for example,\r\ninput is \"[INST] how are you [/INST]\", output \"**I am fine**\"\r\nbut it repeat the input, the output is \"**[INST] how are you [/INST] I am fine**\"\r\n\n\n### System Info\n\n_No response_\n\n### Others\n\nDo you have any suggestions? This behaviour will limit the speed of the output and I wonder why this happen?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '18c6e6fea9dcc77c03b36301efe2025a87e177d5', 'files': [{'path': 'src/llmtuner/chat/chat_model.py', 'Loc': {\"('ChatModel', 'chat', 88)\": {'mod': [102]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/chat/chat_model.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "13eb365eb768f30d46967dd5ba302ab1106a96b6", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1443", "iss_label": "solved", "title": "deepspeed\u8fdb\u884c\u4e00\u673a\u516b\u5361\u65ad\u70b9\u7eed\u8bad\u65f6\uff0c\u76f4\u63a5\u914d\u7f6e--checkpoint_dir\u53c2\u6570\uff0c\u4ec5\u52a0\u8f7dmodel\u6743\u91cd\uff0c\u65e0\u6cd5\u52a0\u8f7doptimizer\u6743\u91cd", "body": "\u5728\u914d\u7f6ebaichuan2\u6a21\u578b\u5728\u56fa\u5b9a `step` \u8fdb\u884c\u65ad\u70b9\u7eed\u8bad\u65f6\uff0c\u5e0c\u671b\u540c\u65f6\u52a0\u8f7dmp_rank_00_model_states.pt\u4ee5\u53cazero_pp_rank_*_mp_rank_00_optim_states.pt\r\n\r\n\u7136\u800c\uff0c\u5728\u4f7f\u7528\u5982\u4e0b\u547d\u4ee4 `--checkpoint_dir` \u542f\u52a8\u65ad\u70b9\u7eed\u8bad\u65f6\uff0c\u5e76\u6ca1\u6709\u8f7d\u5165\u4f18\u5316\u5668zero_pp_rank_*_mp_rank_00_optim_states.pt\r\n`\r\ndeepspeed --num_gpus ${NUM_GPUS_PER_WORKER} src/train_bash.py \\\r\n --stage sft \\\r\n --model_name_or_path /xxxxxxxxxx/model_weight \\\r\n --deepspeed ./ds_config.json \\\r\n --do_train \\\r\n --dataset alpaca_gpt4_en \\\r\n --template default \\\r\n --checkpoint_dir /xxxxxxxxxxxxxxxxx/output_sft/checkpoint-1 \\\r\n --finetuning_type full \\\r\n --output_dir ./output_sft \\\r\n --overwrite_cache \\\r\n --overwrite_output_dir \\\r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps 16 \\\r\n --lr_scheduler_type cosine \\\r\n --logging_steps 1 \\\r\n --save_steps 10000 \\\r\n --learning_rate 5e-5 \\\r\n --num_train_epochs 10.0 \\\r\n --plot_loss \\\r\n --fp16 | tee logs/train_g16_lr5e.log\r\n`\r\n\r\n\r\n\u8bf7\u95ee\u5982\u4f55\u624d\u80fd\u987a\u5229\u52a0\u8f7d\u6240\u6709\u6743\u91cd\u4e0e\u72b6\u6001\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '13eb365eb768f30d46967dd5ba302ab1106a96b6', 'files': [{'path': 'src/llmtuner/tuner/sft/workflow.py', 'Loc': {\"(None, 'run_sft', 19)\": {'mod': [67]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/tuner/sft/workflow.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "5377d0bf95f2fc79b75b253e956a7945f3030ad3", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/908", "iss_label": "solved", "title": "\u8bc4\u4f30\u6307\u6807\u9664\u4e86BLEU \u5206\u6570\u548c\u6c49\u8bed ROUGE 
\u5206\u6570\u8fd8\u80fd\u4f7f\u7528\u5176\u4ed6\u7684\u8bc4\u4f30\u6307\u6807\u5417\uff1f", "body": "\u6211\u60f3\u628a\u6a21\u578b\u7528\u4e8e\u610f\u56fe\u8bcd\u69fd\u7684\u63d0\u53d6\uff0c\u4e00\u822c\u8fd9\u4e2a\u4efb\u52a1\u7684\u8bc4\u4ef7\u6307\u6807\u662f\u51c6\u786e\u7387\u548cF1 score\u7b49\uff0c\u8bf7\u95ee\u5728\u8fd9\u4e2a\u9879\u76ee\u91cc\u80fd\u4f7f\u7528\u51c6\u786e\u7387\u548cF1 score\u4f5c\u4e3a\u8bc4\u4ef7\u6307\u6807\u5417\uff1f\u5e94\u8be5\u600e\u4e48\u505a\u5462\uff1f\u8c22\u8c22\u5927\u4f6c\u89e3\u7b54~", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5377d0bf95f2fc79b75b253e956a7945f3030ad3', 'files': [{'path': 'src/llmtuner/tuner/sft/metric.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/tuner/sft/metric.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "93809d1c3b73898a89cbdd99061eeeed5fd4f6a7", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1120", "iss_label": "solved", "title": "\u7cfb\u7edf\u63d0\u793a\u8bcd", "body": "\u60f3\u8bf7\u6559\u4e0b\u5927\u4f6c\uff0c\u201c\u7cfb\u7edf\u63d0\u793a\u8bcd\uff08\u975e\u5fc5\u586b\uff09\u201c\u6846\u4f20\u5165\u7684\u5185\u5bb9\u600e\u4e48\u8f93\u5165\u7ed9\u6a21\u578b\u7684\uff0c\u600e\u4e48\u548c\u201d\u8f93\u5165\u3002\u3002\u201c\u6846\u4f20\u5165\u7684\u5185\u5bb9\u62fc\u63a5\u7684\uff1f\u5bf9\u5e94\u7684\u4ee3\u7801\u5728\u54ea\u91cc\uff1f\r\n\r\n\u611f\u8c22\u611f\u8c22", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '93809d1c3b73898a89cbdd99061eeeed5fd4f6a7', 'files': [{'path': 'src/llmtuner/extras/template.py', 'Loc': {\"('Template', '_encode', 93)\": {'mod': [109]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/extras/template.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "757564caa1a0e83d184100604e43efe3c5030c0e", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/2584", "iss_label": "solved", "title": "\u8bf7\u6559llama pro\u5e94\u8be5\u600e\u4e48\u7528\uff1f\u662f\u53ef\u4ee5\u7528\u6765\u5fae\u8c03\u5417\uff1f", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### Reproduction\n\n\u8bf7\u6559llama pro\u5e94\u8be5\u600e\u4e48\u7528\uff1f\u662f\u53ef\u4ee5\u7528\u6765\u505apt\u548cSFT\u5417\uff1f\n\n### Expected behavior\n\n_No response_\n\n### System Info\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '757564caa1a0e83d184100604e43efe3c5030c0e', 'files': [{'path': 'tests/llama_pro.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tests/llama_pro.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "e678c1ccb2583e7b3e9e5bf68b58affc1a71411c", "is_iss": 0, "iss_html_url": 
"https://github.com/hiyouga/LLaMA-Factory/issues/5011", "iss_label": "solved", "title": "Compute_Accuracy", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n![image](https://github.com/user-attachments/assets/4847743e-e25b-4136-a3f4-43a3e7335f80)\r\n\r\nI'm curious about this metrics for and how could i use this? and when? ( ComputeAccuracy )\r\n\r\n![image](https://github.com/user-attachments/assets/672f14bb-c812-45fe-ad77-d3c66f660ce5)\r\nand I saw llama-factory paper's metrics ( multi-choice ) and I wonder if this metrics are match with ComputeAccuracy\r\n\r\nanyone can answer me ?\r\n\r\nplease tell me how can i use this metrics, give me some example commands \r\n\r\nthank you! \n\n### Reproduction\n\n. \n\n### Expected behavior\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e678c1ccb2583e7b3e9e5bf68b58affc1a71411c', 'files': [{'path': 'examples/train_lora/llama3_lora_eval.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "\u914d\u7f6e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["examples/train_lora/llama3_lora_eval.yaml"], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4787", "iss_label": "solved", "title": "\u5168\u91cf\u5fae\u8c03BaiChuan2-7B-Chat\u7684yaml\u6587\u4ef6\u4e2d\u5982\u4f55\u4fee\u6539\u8d85\u53c2\u6570\u80fd\u5728\u4e09\u5f20A6000\u4e0a\u8fd0\u884c", "body": "### Reminder\r\n\r\n- [X] I have read the README and searched the existing issues.\r\n\r\n### System Info\r\n\r\n- `llamafactory` version: 0.8.2.dev0\r\n- Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.19\r\n- PyTorch version: 2.3.0+cu121 (GPU)\r\n- Transformers version: 4.41.2\r\n- Datasets version: 2.20.0\r\n- Accelerate version: 0.31.0\r\n- PEFT version: 0.11.1\r\n- TRL version: 0.9.4\r\n- GPU type: NVIDIA RTX A6000\r\n- DeepSpeed version: 0.14.0\r\n- vLLM version: 0.4.3\r\n\r\n### Reproduction\r\n```yaml\r\n### model\r\nmodel_name_or_path: /data/Baichuan2-7B-Chat\r\n\r\n### method\r\nstage: sft\r\ndo_train: true\r\nfinetuning_type: full\r\n\r\n### ddp\r\nddp_timeout: 180000000\r\ndeepspeed: examples/deepspeed/ds_z3_config.json\r\n\r\n### dataset\r\ndataset: entity\r\ntemplate: baichuan2\r\ncutoff_len: 1024\r\nmax_samples: 1000\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/baichuan2-7b/full/sft\r\nlogging_steps: 10\r\nsave_steps: 500\r\nplot_loss: true\r\noverwrite_output_dir: true\r\n\r\n### train\r\nper_device_train_batch_size: 1\r\ngradient_accumulation_steps: 2\r\nlearning_rate: 1.0e-4\r\nnum_train_epochs: 3.0\r\nlr_scheduler_type: cosine\r\nwarmup_ratio: 0.1\r\npure_bf16: true\r\n\r\n### eval\r\nval_size: 0.1\r\nper_device_eval_batch_size: 1\r\neval_strategy: steps\r\neval_steps: 500\r\n```\r\n### Expected 
behavior\r\n\r\n\u60a8\u7684\u9879\u76ee\u4e2d\u7ed9\u51fa7B\u6a21\u578b\u80fd\u5728120G\u7684\u663e\u5b58\u4e0a\u8fd0\u884c\uff0c\u73b0\u5728\u6211\u57283\u5f20A6000\u4e0a\u8fd0\u884c\u4f1a\u51fa\u73b0OOM\uff0c\u5e0c\u671b\u60a8\u80fd\u544a\u8bc9\u6211\u600e\u4e48\u4fee\u6539\u8d85\u53c2\u6570\u80fd\u8ba9\u5b83\u8dd1\u8d77\u6765\u3002\u6211\u4e5f\u53c2\u8003\u4e86\u4e4b\u524d\u7684issue\uff0c\u8bbe\u7f6e\u4e86pure_bf16\uff0c\u4ecd\u7136\u4e0d\u80fd\u8fd0\u884c\u3002\r\n\r\n### Others\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '955e01c038ccc708def77f392b0e342f2f51dc9b', 'files': [{'path': 'examples/deepspeed/ds_z3_offload_config.json', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/deepspeed/ds_z3_offload_config.json"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4803", "iss_label": "solved", "title": "predict_oom", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nmodel_name_or_path: llm/Qwen2-72B-Instruct\r\n# adapter_name_or_path: saves/qwen2_7b_errata_0705/lora_ace04_instruction_v1_savesteps_10/sft\r\n\r\n### method\r\nstage: sft\r\ndo_predict: true\r\nfinetuning_type: lora\r\n\r\n### dataset\r\ndataset: prompt_to_get_cot_normal\r\ntemplate: qwen\r\ncutoff_len: 2048\r\nmax_samples: 1000\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/qwen2_72b_errata_0712/lora/predict\r\noverwrite_output_dir: true\r\n\r\n### eval\r\nper_device_eval_batch_size: 1\r\npredict_with_generate: true\r\nddp_timeout: 180000000\n\n### Reproduction\n\n8\u5361A100 80G \u5728 72b \u7684\u57fa\u5ea7 predict 1k\u7684\u6570\u636e\u663e\u793aoom, \u6240\u6709\u7684\u663e\u5361\u540c\u65f6\u52a0\u8f7d\u6574\u4e2a\u6a21\u578b\u53c2\u6570, \u5bfc\u81f4oom\r\n\u636e\u5b98\u65b9 160G \u5373\u53ef, \u6211\u8fd980*8 \u90fd\u4e0d\u591f, \u8bf7\u95ee\u662fbug\u8fd8\u662f\u9700\u8981\u8bbe\u7f6e\u4ec0\u4e48\u53c2\u6570;\n\n### Expected behavior\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '955e01c038ccc708def77f392b0e342f2f51dc9b', 'files': [{'path': 'Examples/train_lora/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3\n\u7528\u6237\u914d\u7f6e\u9519\u8bef", "loc_way": "comment", "loc_scope": "3", "info_type": "config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Examples/train_lora/"]}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "3f11ab800f7dcf4b61a7c72ead4e051db11a8091", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4178", "iss_label": "solved", "title": "glm-4-9b-chat-1m do_predict\u5f97\u5230\u7684generated_predictions.jsonl\u4e2d\u7684label\u51fa\u73b0\u4e86\\n\u548c\u4e00\u4e9b\u975e\u6570\u636e\u96c6\u4e2d\u7684\u7ed3\u679c\u3002", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nllamafactory 0.7.2.dev0\r\nPython 3.10.14\r\nubuntu 20.04\n\n### 
Reproduction\n\n$llamafactory-cli train glm_predict.yaml\r\n\r\ngenerated_predictions.jsonl \u8f93\u51fa\r\n{\"label\": \"\\n[S,137.0]\", \"predict\": \"\\n[S,137.0]\"}\r\n{\"label\": \"\\n\", \"predict\": \"\\n\"}\r\n{\"label\": \"\\n\", \"predict\": \"\\n\"}\r\n{\"label\": \"\\n[S,593\", \"predict\": \"\\n[S,593\"}\r\n{\"label\": \"\\n[H,593\", \"predict\": \"\\n[S,593\"}\r\n\r\nglm_predict.yaml \u5185\u5bb9\r\n### model\r\nmodel_name_or_path: ./THUDM_glm-4-9b-chat-1m\r\nadapter_name_or_path: saves/glm/lora/sft\r\n\r\n### method\r\nstage: sft\r\ndo_predict: true\r\nfinetuning_type: lora\r\n\r\n### dataset\r\ndataset: data_v0.1\r\ntemplate: glm4\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/glm/lora/predict\r\n\r\n### eval\r\nper_device_eval_batch_size: 4\r\npredict_with_generate: true\r\n\r\n\r\n\r\n\n\n### Expected behavior\n\n\u671f\u671b\u8f93\u51fa\r\ngenerated_predictions.jsonl \u8f93\u51fa\r\n{\"label\": \"[S,137.0]\", \"predict\": \"[S,137.0]\"}\r\n{\"label\": \"[S,593]\", \"predict\": \"[S,593]\"}\r\n{\"label\": \"[H,593]\", \"predict\": \"[S,593]\"}\r\n\r\n\r\n\u53ea\u5305\u542b\"\\n\"\u7684\u7ed3\u679c\u90fd\u6ca1\u6709\u51fa\u73b0\u5728\u6570\u636e\u96c6\u4e2d\u3002\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3f11ab800f7dcf4b61a7c72ead4e051db11a8091', 'files': [{'path': 'src/llamafactory/data/template.py', 'Loc': {'(None, None, None)': {'mod': [663, 664]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llamafactory/data/template.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/230", "iss_label": "solved", "title": "\u4f7f\u7528\u672c\u9879\u76ee\u8bad\u7ec3baichuan-13b\u4e4b\u540e\uff0c\u5982\u4f55\u5728baichuan-13b\u4e2d\u52a0\u8f7d\u8bad\u7ec3\u5b8c\u7684\u6a21\u578b", "body": "\u8bad\u7ec3\u5b8c\u6210\u540e\u5982\u4f55\u5e94\u8be5\u5982\u4f55\u5728baichuan-13b\u7684\u9879\u76ee\u4e2d\u4fee\u6539\u52a0\u8f7d\u8bad\u7ec3\u5b8c\u6210\u540e\u7684\u6a21\u578b\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd46c136c0e104c50999df18a88c42658b819f71f', 'files': [{'path': 'src/export_model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/export_model.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "024b0b1ab28d3c3816f319370ed79a4f26d40edf", "is_iss": 1, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1995", "iss_label": "solved", "title": "Phi-1.5\u8dd1RM lora \u51fa\u73b0'NoneType' object is not subcriptable", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### Reproduction\n\n\r\n\r\nsh\u811a\u672c\uff1a\r\n```\r\ndeepspeed --num_gpus 8 --master_port=9901 src/train_bash.py \\\r\n --stage rm \\\r\n --model_name_or_path Phi-1.5 \\\r\n --deepspeed ds_config.json \\ \r\n --adapter_name_or_path sft_lora \\ \r\n 
--create_new_adapter \\\r\n --do_train \\ \r\n --dataset comparision_gpt4_en \\\r\n --template default \\\r\n --finetuning_type lora \\\r\n --lora_target Wqkv \\ \r\n --overwrite_ouput_dir \\ \r\n --output_dir rm_lora \\ \r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps 4 \\\r\n --lr_scheduler_type cosine \\ \r\n --logging_steps 1 \\\r\n --save_steps 200 \\ \r\n --learning_rate 1e-6 \\ \r\n --num_train_epochs 1.0 \\ \r\n --max_steps 200 \\\r\n --fp16 > rm.log 2>&1 &\r\nwait\r\n \r\n```\n\n### Expected behavior\n\n\u671f\u671b\u7ed3\u679c\uff1a\u6210\u529f\u8bfb\u53d6\u6743\u91cd\u5e76\u8fdb\u5165\u8bad\u7ec3\n\n### System Info\n\n\u8bbe\u5907\uff1aNPU\r\n\u5305\u7248\u672c\uff1a\r\n```\r\ntransformers==4.36.1\r\ndeepspeed==0.12.4\r\npeft==0.7.1\r\ntrl==0.7.4\r\ntorch==2.1.0\r\naccelerate==0.25.0\r\n```\n\n### Others\n\n\u62a5\u9519\u4fe1\u606f\uff1a\r\nTraceback\r\n File \"src/train_bash.py\" line 14\r\n main()\r\nFile \"src/train_bash.py\" line 5\r\n run_exp()\r\nFile \"LLaMA-Factory/src/llmtuner/train/tuner.py\", line 28 in run_exp\r\n run_rm(model_args, data_args, training_args, finetuning_args, callbacks)\r\nFile \"LLaMA-Factory/src/llmtuner/train/rm/workflow.py\", line 50, in run_rm\r\n train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)\r\nFile \"transformers/trainer.py\" line 2728\r\n loss = self.compute_loss(model, inputs)\r\nFile \"LLaMA-Factory/src/llmtuner/train/rm/trainer.py\" in line 41, in compute_loss\r\n _, _, values = model(**inputs, output_hidden_states=True, return_dict=True)\r\nFile \".../trl/models/modeling_value_head.py\", in line 175. in forward\r\n last_hidden_state = base_model_output.hidden_state[-1]\r\nTypeError: 'NoneType' object is not subscriptable\r\n\r\n\u6211\u4e00\u5f00\u59cb\u6000\u7591\u662f\u6743\u91cd\u95ee\u9898\uff0c\u91cd\u65b0\u4e0b\u8f7d\u4e86\u6743\u91cd\u4f9d\u7136\u62a5\u8be5\u9519\u8bef\uff0c\u5c1d\u8bd5\u5c06Phi-1.5\u6362\u6210Phi-2\u540c\u6837\u62a5\u9519\r\n\r\n \r\n \r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "docs", "pro": "transformers", "path": ["model_doc/phi"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["model_doc/phi"], "test": [], "config": [], "asset": []}} -{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "is_iss": 0, "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/226", "iss_label": "solved", "title": "\u8bf7\u95ee\u9879\u76ee\u4e2d\u5bf9\u591a\u8f6e\u5bf9\u8bdd\u8bed\u6599\u7684\u5904\u7406\u65b9\u5f0f", "body": "\u662f\u7528\u591a\u4e2a\u5386\u53f2\u5bf9\u8bdd\u62fc\u63a5\u540e\u4f5c\u4e3ainput\u6765\u9884\u6d4b\u6700\u540e\u4e00\u8f6e\u7684\u56de\u7b54\u5417\uff1f\u8fd8\u662f\u628a\u5386\u53f2\u5bf9\u8bdd\u62c6\u5206\u6210\u591a\u4e2a\u8f6e\u6b21\u7684\u8bad\u7ec3\u8bed\u6599\u6bd4\u59825\u8f6e\u6b21\u5bf9\u8bdd\u53ef\u4ee5\u62c6\u5206\u62101 2 3 4 5\u8f6e\u6b21\u5bf9\u8bdd\u6837\u672c\u3002\u5173\u4e8e\u5177\u4f53\u7684\u5904\u7406\u8fc7\u7a0b\u4ee3\u7801 \u80fd\u5426\u8bf7\u4f5c\u8005\u6307\u51fa\u4e00\u4e0b \u6211\u60f3\u5b66\u4e60\u5b66\u4e60\u3002\u8c22\u8c22\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd46c136c0e104c50999df18a88c42658b819f71f', 'files': [{'path': 'src/llmtuner/dsets/preprocess.py', 'Loc': {\"(None, 
'preprocess_supervised_dataset', 50)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/dsets/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "2f3aab9cfdc139f399387dbb90300d5a8bf8d2f1", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/375", "iss_label": "bug", "title": "ValueError: Requested tokens exceed context window of 1000", "body": "After I ingest a file, run privateGPT and try to ask anything, I get following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Stable_Diffusion\\privateGPT\\privateGPT.py\", line 75, in \r\n main()\r\n File \"C:\\Stable_Diffusion\\privateGPT\\privateGPT.py\", line 47, in main\r\n res = qa(query)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\retrieval_qa\\base.py\", line 120, in _call\r\n answer = self.combine_documents_chain.run(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 239, in run\r\n return self(kwargs, callbacks=callbacks)[self.output_keys[0]]\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\combine_documents\\base.py\", line 84, in _call\r\n output, extra_return_dict = self.combine_docs(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\combine_documents\\stuff.py\", line 87, in combine_docs\r\n return self.llm_chain.predict(callbacks=callbacks, **inputs), {}\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 213, in predict\r\n return self(kwargs, callbacks=callbacks)[self.output_key]\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 69, in _call\r\n response = self.generate([inputs], run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 79, in generate\r\n return self.llm.generate_prompt(\r\n File 
\"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 134, in generate_prompt\r\n return self.generate(prompt_strings, stop=stop, callbacks=callbacks)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 191, in generate\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 185, in generate\r\n self._generate(prompts, stop=stop, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 405, in _generate\r\n self._call(prompt, stop=stop, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\llamacpp.py\", line 225, in _call\r\n for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\llamacpp.py\", line 274, in stream\r\n for chunk in result:\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\llama_cpp\\llama.py\", line 618, in _create_completion\r\n raise ValueError(\r\nValueError: Requested tokens exceed context window of 1000\r\n```\r\n\r\nI tried it with docx and pdf, used models are ggml-vic13b-q5_1.bin and stable-vicuna-13B.ggml.q4_0.bin.\r\nDuring ingestion or loading privateGPT I get no error.\r\n\r\nOS: Windows 10\r\nCPU: Ryzen 7 3700\r\nRAM: 32gb\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2f3aab9cfdc139f399387dbb90300d5a8bf8d2f1', 'files': [{'path': 'ingest.py', 'Loc': {\"(None, 'process_documents', 114)\": {'mod': [124]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nor\n1", "info_type": "Code"}, "loctype": {"code": ["ingest.py"], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "24cfddd60f74aadd2dade4c63f6012a2489938a1", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1125", "iss_label": "", "title": "LLM mock donot gives any output", "body": "i have downloaded the llm models \r\n\r\nused this \r\npoetry install --with local\r\npoetry run python scripts/setup\r\n\r\nstill i get this output \r\n![Screenshot from 2023-10-27 23-50-34](https://github.com/imartinez/privateGPT/assets/148402457/201b18f9-c269-40e4-99c5-a22fd3b9366d)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '24cfddd60f74aadd2dade4c63f6012a2489938a1', 'files': [{'path': 'settings.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["settings.yaml"], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "dd1100202881a01b6b013b7bc1faad8b5c63fec9", "is_iss": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/850", "iss_label": "primordial", "title": 
"privateGPT\u4e2d\u6587\u63d0\u95ee\u663e\u793atoken\u8d85\u51fa\u9650\u5236\uff0c\u82f1\u6587\u63d0\u95ee\u4e0d\u5b58\u5728\u8fd9\u4e2a\u95ee\u9898", "body": "\r\ntoken\u7684\u8ba1\u7b97\u65b9\u5f0f\u5f88\u5947\u602a\u4e94\u4e2a\u5b57\u6307\u4ee4\u7684token\u6bd4\u4e03\u4e2a\u5b57\u591a\r\n![\u5fae\u4fe1\u56fe\u7247_20230713094810](https://github.com/imartinez/privateGPT/assets/139415035/6346ae1f-9c65-4721-b7dd-a176fc9be4e1)\r\n![\u5fae\u4fe1\u56fe\u7247_20230713094822](https://github.com/imartinez/privateGPT/assets/139415035/60f2d272-8a80-48d7-9032-4d915a83aa7d)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "4", "loc_way": "comment", "loc_scope": "1", "info_type": "config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "d17c34e81a84518086b93605b15032e2482377f7", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1724", "iss_label": "", "title": "Error in Model Download and Tokenizer Fetching During Setup Script Execution", "body": "### Environment\r\nOperating System: Macbook Pro M1\r\nPython Version: 3.11\r\n\r\nDescription\r\nI'm encountering an issue when running the setup script for my project. The script is supposed to download an embedding model and an LLM model from Hugging Face, followed by their respective tokenizers. While the script successfully downloads the embedding and LLM models, it fails when attempting to download the tokenizer with a 404 Client Error.\r\n\r\n### Steps to Reproduce\r\nRun `poetry run python scripts/setup`\r\nEmbedding model (BAAI/bge-small-en-v1.5) and the LLM model (mistral-7b-instruct-v0.2.Q4_K_M.gguf) are downloaded successfully.\r\nThe script then attempts to download a tokenizer and fails.\r\n\r\n### Actual Behavior\r\nThe script throws an error when trying to download the tokenizer. The error message indicates a 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json. 
This suggests that either the tokenizer's name is not being correctly passed (as indicated by the 'None' in the URL) or there's an issue with the tokenizer's availability on Hugging Face.\r\n\r\n### Logs\r\n```bash\r\n22:02:47.207 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default']\r\nDownloading embedding BAAI/bge-small-en-v1.5\r\nFetching 14 files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14/14 [00:00<00:00, 14.81it/s]\r\nEmbedding model downloaded!\r\nDownloading LLM mistral-7b-instruct-v0.2.Q4_K_M.gguf\r\nLLM model downloaded!\r\nDownloading tokenizer None\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 398, in cached_file\r\n resolved_file = hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1374, in hf_hub_download\r\n raise head_call_error\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1247, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1624, in get_hf_file_metadata\r\n r = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 402, in _request_wrapper\r\n response = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 426, in _request_wrapper\r\n hf_raise_for_status(response)\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 320, in 
hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-65f21479-09e39977255bdb72502d4b8c;66371627-2d02-44c6-8f25-d115820c1986)\r\n\r\nRepository Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/scripts/setup\", line 43, in \r\n AutoTokenizer.from_pretrained(\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 767, in from_pretrained\r\n tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 600, in get_tokenizer_config\r\n resolved_config_file = cached_file(\r\n ^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 421, in cached_file\r\n raise EnvironmentError(\r\nOSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=`\r\n```\r\n### Additional Information\r\nIt seems like the script is not correctly fetching the name or identifier for the tokenizer.\r\nThe issue might be related to how the tokenizer's name is being resolved or passed in the script (None).\r\nI also tried with docker compose, yielding same results. 
Maybe it is just some setting that I am missing from the docs?\r\n\r\nThank you\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd17c34e81a84518086b93605b15032e2482377f7', 'files': [{'path': 'settings.yaml', 'Loc': {'(None, None, 42)': {'mod': [42]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["settings.yaml"], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "026b9f895cfb727da523a20c59773146801236ba", "is_iss": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/13", "iss_label": "", "title": "gpt_tokenize: unknown token '?'", "body": "gpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\n[1] 32658 killed python3 privateGPT.py", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3\n\uff1f\uff1f\uff1f", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "d3acd85fe34030f8cfd7daf50b30c534087bdf2b", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1514", "iss_label": "", "title": "LLM Chat only returns \"#\" characters", "body": "No matter the prompt, privateGPT only returns hashes as the response. This doesn't occur when not using CUBLAS. 
\r\n\r\n\"image\"\r\n\r\nSet up info:\r\n\r\nNVIDIA GeForce RTX 4080\r\nWindows 11\r\n\r\n\"image\"\r\n\r\n\"image\"\r\n\r\n\r\n\r\naccelerate==0.25.0\r\naiofiles==23.2.1\r\naiohttp==3.9.1\r\naiosignal==1.3.1\r\naiostream==0.5.2\r\naltair==5.2.0\r\nannotated-types==0.6.0\r\nanyio==3.7.1\r\nattrs==23.1.0\r\nbeautifulsoup4==4.12.2\r\nblack==22.12.0\r\nboto3==1.34.2\r\nbotocore==1.34.2\r\nbuild==1.0.3\r\nCacheControl==0.13.1\r\ncertifi==2023.11.17\r\ncfgv==3.4.0\r\ncharset-normalizer==3.3.2\r\ncleo==2.1.0\r\nclick==8.1.7\r\ncolorama==0.4.6\r\ncoloredlogs==15.0.1\r\ncontourpy==1.2.0\r\ncoverage==7.3.3\r\ncrashtest==0.4.1\r\ncycler==0.12.1\r\ndataclasses-json==0.5.14\r\ndatasets==2.14.4\r\nDeprecated==1.2.14\r\ndill==0.3.7\r\ndiskcache==5.6.3\r\ndistlib==0.3.8\r\ndistro==1.8.0\r\ndnspython==2.4.2\r\ndulwich==0.21.7\r\nemail-validator==2.1.0.post1\r\nevaluate==0.4.1\r\nfastapi==0.103.2\r\nfastjsonschema==2.19.1\r\nffmpy==0.3.1\r\nfilelock==3.13.1\r\nflatbuffers==23.5.26\r\nfonttools==4.46.0\r\nfrozenlist==1.4.1\r\nfsspec==2023.12.2\r\ngradio==4.10.0\r\ngradio_client==0.7.3\r\ngreenlet==3.0.2\r\ngrpcio==1.60.0\r\ngrpcio-tools==1.60.0\r\nh11==0.14.0\r\nh2==4.1.0\r\nhpack==4.0.0\r\nhttpcore==1.0.2\r\nhttptools==0.6.1\r\nhttpx==0.25.2\r\nhuggingface-hub==0.19.4\r\nhumanfriendly==10.0\r\nhyperframe==6.0.1\r\nidentify==2.5.33\r\nidna==3.6\r\nimportlib-resources==6.1.1\r\niniconfig==2.0.0\r\ninjector==0.21.0\r\ninstaller==0.7.0\r\nitsdangerous==2.1.2\r\njaraco.classes==3.3.0\r\nJinja2==3.1.2\r\njmespath==1.0.1\r\njoblib==1.3.2\r\njsonschema==4.20.0\r\njsonschema-specifications==2023.11.2\r\nkeyring==24.3.0\r\nkiwisolver==1.4.5\r\nllama-index==0.9.3\r\nllama_cpp_python==0.2.29\r\nmarkdown-it-py==3.0.0\r\nMarkupSafe==2.1.3\r\nmarshmallow==3.20.1\r\nmatplotlib==3.8.2\r\nmdurl==0.1.2\r\nmore-itertools==10.2.0\r\nmpmath==1.3.0\r\nmsgpack==1.0.7\r\nmultidict==6.0.4\r\nmultiprocess==0.70.15\r\nmypy==1.7.1\r\nmypy-extensions==1.0.0\r\nnest-asyncio==1.5.8\r\nnetworkx==3.2.1\r\nnltk==3.8.1\r\nnodeenv==1.8.0\r\nnumpy==1.26.3\r\nonnx==1.15.0\r\nonnxruntime==1.16.3\r\nopenai==1.5.0\r\noptimum==1.16.1\r\norjson==3.9.10\r\npackaging==23.2\r\npandas==2.1.4\r\npathspec==0.12.1\r\npexpect==4.9.0\r\nPillow==10.1.0\r\npkginfo==1.9.6\r\nplatformdirs==4.1.0\r\npluggy==1.3.0\r\npoetry==1.7.1\r\npoetry-core==1.8.1\r\npoetry-plugin-export==1.6.0\r\nportalocker==2.8.2\r\npre-commit==2.21.0\r\n-e 
git+https://github.com/imartinez/privateGPT@d3acd85fe34030f8cfd7daf50b30c534087bdf2b#egg=private_gpt\r\nprotobuf==4.25.1\r\npsutil==5.9.6\r\nptyprocess==0.7.0\r\npyarrow==14.0.1\r\npydantic==2.5.2\r\npydantic-extra-types==2.2.0\r\npydantic-settings==2.1.0\r\npydantic_core==2.14.5\r\npydub==0.25.1\r\nPygments==2.17.2\r\npyparsing==3.1.1\r\npypdf==3.17.2\r\npyproject_hooks==1.0.0\r\npyreadline3==3.4.1\r\npytest==7.4.3\r\npytest-asyncio==0.21.1\r\npytest-cov==3.0.0\r\npython-dateutil==2.8.2\r\npython-dotenv==1.0.0\r\npython-multipart==0.0.6\r\npytz==2023.3.post1\r\npywin32==306\r\npywin32-ctypes==0.2.2\r\nPyYAML==6.0.1\r\nqdrant-client==1.7.0\r\nrapidfuzz==3.6.1\r\nreferencing==0.32.0\r\nregex==2023.10.3\r\nrequests==2.31.0\r\nrequests-toolbelt==1.0.0\r\nresponses==0.18.0\r\nrich==13.7.0\r\nrpds-py==0.14.1\r\nruff==0.1.8\r\ns3transfer==0.9.0\r\nsafetensors==0.4.1\r\nscikit-learn==1.3.2\r\nscipy==1.11.4\r\nsemantic-version==2.10.0\r\nsentence-transformers==2.2.2\r\nsentencepiece==0.1.99\r\nshellingham==1.5.4\r\nsix==1.16.0\r\nsniffio==1.3.0\r\nsoupsieve==2.5\r\nSQLAlchemy==2.0.23\r\nstarlette==0.27.0\r\nsympy==1.12\r\ntenacity==8.2.3\r\nthreadpoolctl==3.2.0\r\ntiktoken==0.5.2\r\ntokenizers==0.15.0\r\ntomlkit==0.12.0\r\ntoolz==0.12.0\r\ntorch==2.1.2+cu121\r\ntorchaudio==2.1.2+cu121\r\ntorchvision==0.16.2+cu121\r\ntqdm==4.66.1\r\ntransformers==4.36.1\r\ntrove-classifiers==2024.1.8\r\ntyper==0.9.0\r\ntypes-PyYAML==6.0.12.12\r\ntyping-inspect==0.9.0\r\ntyping_extensions==4.9.0\r\ntzdata==2023.3\r\nujson==5.9.0\r\nurllib3==1.26.18\r\nuvicorn==0.24.0.post1\r\nvirtualenv==20.25.0\r\nwatchdog==3.0.0\r\nwatchfiles==0.21.0\r\nwebsockets==11.0.3\r\nwrapt==1.16.0\r\nxxhash==3.4.1\r\nyarl==1.9.4", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd3acd85fe34030f8cfd7daf50b30c534087bdf2b', 'files': [{'path': 'private_gpt/components/llm/llm_component.py', 'Loc': {\"('LLMComponent', '__init__', 21)\": {'mod': [45]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["private_gpt/components/llm/llm_component.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "c4b247d696c727c1da6d993ce4f6c3a557e91b42", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/685", "iss_label": "enhancement\nprimordial", "title": "CPU utilization", "body": "CPU utilization appears to be capped at 20%\r\nIs there a way to increase CPU utilization and thereby enhance performance?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c4b247d696c727c1da6d993ce4f6c3a557e91b42', 'files': [{'path': 'privateGPT.py', 'Loc': {\"(None, 'main', 23)\": {'mod': [36]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "7ae80e662936bd946a231d1327bde476556c5d61", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/181", "iss_label": "primordial", "title": "Segfault : not enough space in the context's memory pool", "body": "ggml_new_tensor_impl: not enough space in the 
context's memory pool (needed 3779301744, available 3745676000)\r\nzsh: segmentation fault python3.11 privateGPT.py\r\n\r\nWhats context memory pool? can i configure it? i actually have a lot of excess memory", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7ae80e662936bd946a231d1327bde476556c5d61', 'files': [{'path': 'ingest.py', 'Loc': {\"(None, 'main', 37)\": {'mod': [47]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ingest.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "9d47d03d183685c675070d47ad3beb67446d6580", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/630", "iss_label": "bug\nprimordial", "title": "Use falcon model in privategpt", "body": "Hi how can i use Falcon model in privategpt?\r\n\r\nhttps://huggingface.co/tiiuae/falcon-40b-instruct\r\n\r\nThanks", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9d47d03d183685c675070d47ad3beb67446d6580', 'files': [{'path': 'privateGPT.py', 'Loc': {\"(None, 'main', 23)\": {'mod': [32]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "380b119581d2afcd24948f1108507b138490aec6", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/235", "iss_label": "bug\nprimordial", "title": "Need help on in some errors", "body": " File \"F:\\privateGPT\\Lib\\site-packages\\langchain\\embeddings\\llamacpp.py\", line 79, in validate_environment\r\n values[\"client\"] = Llama(\r\n ^^^^^^\r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama.py\", line 155, in __init__ \r\n self.ctx = llama_cpp.llama_init_from_file(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n \r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama_cpp.py\", line 182, in llama_init_from_file\r\n return _lib.llama_init_from_file(path_model, params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOSError: [WinError -1073741795] Windows Error 0xc000001d\r\nDuring handling of the above exception, another exception occurred:\r\n\r\n File \"F:\\privateGPT\\ingest.py\", line 62, in \r\n main()\r\n File \"F:\\privateGPT\\ingest.py\", line 53, in main\r\n llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx) \r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \r\n File \"pydantic\\main.py\", line 339, in pydantic.main.BaseModel.__init__\r\n File \"pydantic\\main.py\", line 1102, in pydantic.main.validate_model\r\n File \"F:\\privateGPT\\Lib\\site-packages\\langchain\\embeddings\\llamacpp.py\", line 99, in validate_environment\r\n raise NameError(f\"Could not load Llama model from path: {model_path}\")\r\nNameError: Could not load Llama model from path: F:/privateGPT/models/ggml-model-q4_0.bin \r\nException ignored in: \r\nTraceback (most recent call last):\r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama.py\", line 978, in __del__\r\n if self.ctx is not None:\r\n ^^^^\r\nAttributeError: 'Llama' object has no attribute 'ctx'\r\n", "code": null, "pr_html_url": 
null, "commit_html_url": null, "file_loc": "{'base_commit': '380b119581d2afcd24948f1108507b138490aec6', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "b1057afdf8f65fdb10e4160adbd8462be0c08271", "is_iss": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/796", "iss_label": "primordial", "title": "Unable to instantiate model (type=value_error)", "body": "Installed on Ubuntu 20.04 with Python3.11-venv\r\n\r\nError on line 38:\r\nhttps://github.com/imartinez/privateGPT/blob/b1057afdf8f65fdb10e4160adbd8462be0c08271/privateGPT.py#L38C7-L38C7\r\n\r\nError:\r\n\r\nUsing embedded DuckDB with persistence: data will be stored in: db\r\nFound model file at models/ggml-gpt4all-j-v1.3-groovy.bin\r\nInvalid model file\r\nTraceback (most recent call last):\r\n File \"/home/kk/Documents/privateGPT/privateGPT.py\", line 83, in \r\n main()\r\n File \"/home/kk/Documents/privateGPT/privateGPT.py\", line 38, in main\r\n llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\r\npydantic.error_wrappers.ValidationError: 1 validation error for GPT4All\r\n__root__\r\n Unable to instantiate model (type=value_error)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ggml-gpt4all-j-v1.3-groovy.bin"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ggml-gpt4all-j-v1.3-groovy.bin"]}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "dd1100202881a01b6b013b7bc1faad8b5c63fec9", "is_iss": 1, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/839", "iss_label": "bug\nprimordial", "title": "ERROR: The prompt size exceeds the context window size and cannot be processed.", "body": "Enter a query\uff0c\r\nIt show:\r\n\r\nERROR: The prompt size exceeds the context window size and cannot be processed.GPT-J ERROR: The prompt is2614tokens and the context window is2048!\r\n\r\nERROR: The prompt size exceeds the context window size and cannot be processed.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "6bbec79583b7f28d9bea4b39c099ebef149db843", "is_iss": 0, "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1598", "iss_label": "", "title": "Performance bottleneck using GPU ", "body": "Hi Guys, \r\n\r\nI am running the default Mistral model, and when running queries I am seeing 100% CPU usage (so single core), and up to 29% GPU usage which drops to have 15% mid answer. 
\r\n\r\nI am using a MacBook Pro with M3 Max. I have set: model_kwargs={\"n_gpu_layers\": -1, \"offload_kqv\": True},\r\n\r\nI am curious as LM studio runs the same model with low CPU usage and 80%+ GPU", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6bbec79583b7f28d9bea4b39c099ebef149db843', 'files': [{'path': 'private_gpt/ui/ui.py', 'Loc': {\"('PrivateGptUi', 'yield_deltas', 81)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["private_gpt/ui/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "87ebab0615b1bf9b14b478b055e7059d630b4833", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/6007", "iss_label": "question", "title": "How to limit YouTube Music search to tracks only?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nIs there a way to return only tracks in a YTmusic search? 
Sometimes music videos have sound effects, while I'm only interested in the original song.\r\n\r\nI'm using this command:\r\n`yt-dlp -f bestaudio --playlist-items 1 --default-search \"https://music.youtube.com/search?q=\" -a list-of-tracks.txt`\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '87ebab0615b1bf9b14b478b055e7059d630b4833', 'files': [{'path': 'yt_dlp/extractor/youtube.py', 'Loc': {\"('YoutubeMusicSearchURLIE', None, 6647)\": {'mod': [6676]}}, 'status': 'modified'}, {'path': 'yt_dlp/extractor/youtube.py', 'Loc': {\"('YoutubeMusicSearchURLIE', None, 6647)\": {'mod': [6659]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "91302ed349f34dc26cc1d661bb45a4b71f4417f7", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/7436", "iss_label": "question", "title": "Is YT-DLP capacity of downloading/displaying Automatic caption?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\n**Similar Issue:**\r\n- #5733 \r\n\r\n--- \r\nI may have missed one or 2 that could answer my question. These discussions and answers are not clear to my understanding, so here i am \r\n\r\n**Details**\r\nThere are many videos that have \"auto-generated subtitles | automatic captions\" and no non-generated subtitles. I've ran `yt-dlp --list-subs URL` and discover that it said `URL has no subtitles`. \r\n\r\n**QUESTION:**\r\n1. Is it possible for yt-dlp to display the automatic caption while I am streaming the video to MPV? \r\n2. Does yt-dlp preferred \"non auto-generated caption\"? \r\n\r\nI'm not sure if this is intentional or not due to one discussion via issues that a guy mentioned that yt-dlp preferred non-autogenerated subtitles. 
\r\n\r\n**Command for using MPV with yt-dlp**\r\nthe command was `mpv \"https://youtu.be/i6kccBc-FBQ\" --ytdl-raw-options=write-auto-subs=,write-subs=,sub-lang=en`\r\n\r\nEDIT: added the double quote to the URL in the command line\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\r\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\n[debug] Command-line config: ['-vU', '--list-subs', 'https://youtu.be/i6kccBc-FBQ']\r\n[debug] Portable config \"C:\\Program Scoop\\apps\\yt-dlp\\current\\yt-dlp.conf\": []\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1851 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nAvailable version: stable@2023.06.22, Current version: stable@2023.06.22\r\nCurrent Build Hash: 37e7ffe204309357cfd1388b0e2c782a30e293ddd0f2761a9a8f6afa185b3566\r\nyt-dlp is up to date (stable@2023.06.22)\r\n[youtube] Extracting URL: https://youtu.be/i6kccBc-FBQ\r\n[youtube] i6kccBc-FBQ: Downloading webpage\r\n[youtube] i6kccBc-FBQ: Downloading ios player API JSON\r\n[youtube] i6kccBc-FBQ: Downloading android player API JSON\r\n[debug] Loading youtube-nsig.b7910ca8 from cache\r\n[debug] [youtube] Decrypted nsig ftRL4j1AuTut8ZV => WMPfJf_eWd71gQ\r\n[youtube] i6kccBc-FBQ: Downloading m3u8 information\r\n[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto\r\n[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id\r\n[info] Available automatic captions for i6kccBc-FBQ:\r\nLanguage Name Formats\r\naf Afrikaans vtt, ttml, srv3, srv2, srv1, json3\r\nak Akan vtt, ttml, srv3, srv2, srv1, json3\r\nsq Albanian vtt, ttml, srv3, srv2, srv1, json3\r\nam Amharic vtt, ttml, srv3, srv2, srv1, json3\r\nar Arabic vtt, ttml, srv3, srv2, srv1, json3\r\nhy Armenian vtt, ttml, srv3, srv2, srv1, json3\r\nas Assamese vtt, ttml, srv3, srv2, srv1, json3\r\nay Aymara vtt, ttml, srv3, srv2, srv1, json3\r\naz Azerbaijani vtt, ttml, srv3, srv2, srv1, json3\r\nbn Bangla vtt, ttml, srv3, srv2, srv1, json3\r\neu Basque vtt, ttml, srv3, srv2, srv1, json3\r\nbe Belarusian vtt, ttml, srv3, srv2, srv1, json3\r\nbho Bhojpuri vtt, ttml, srv3, srv2, srv1, json3\r\nbs Bosnian vtt, ttml, srv3, srv2, srv1, json3\r\nbg Bulgarian vtt, ttml, srv3, srv2, srv1, json3\r\nmy Burmese vtt, ttml, srv3, srv2, srv1, json3\r\nca Catalan vtt, ttml, srv3, srv2, srv1, json3\r\nceb Cebuano vtt, ttml, srv3, srv2, srv1, json3\r\nzh-Hans Chinese (Simplified) vtt, ttml, srv3, srv2, srv1, json3\r\nzh-Hant Chinese (Traditional) vtt, ttml, srv3, srv2, srv1, json3\r\nco Corsican vtt, ttml, srv3, srv2, srv1, json3\r\nhr Croatian vtt, ttml, srv3, srv2, srv1, json3\r\ncs Czech vtt, ttml, srv3, srv2, srv1, json3\r\nda Danish vtt, ttml, srv3, 
srv2, srv1, json3\r\ndv Divehi vtt, ttml, srv3, srv2, srv1, json3\r\nnl Dutch vtt, ttml, srv3, srv2, srv1, json3\r\nen-orig English (Original) vtt, ttml, srv3, srv2, srv1, json3\r\nen English vtt, ttml, srv3, srv2, srv1, json3\r\neo Esperanto vtt, ttml, srv3, srv2, srv1, json3\r\net Estonian vtt, ttml, srv3, srv2, srv1, json3\r\nee Ewe vtt, ttml, srv3, srv2, srv1, json3\r\nfil Filipino vtt, ttml, srv3, srv2, srv1, json3\r\nfi Finnish vtt, ttml, srv3, srv2, srv1, json3\r\nfr French vtt, ttml, srv3, srv2, srv1, json3\r\ngl Galician vtt, ttml, srv3, srv2, srv1, json3\r\nlg Ganda vtt, ttml, srv3, srv2, srv1, json3\r\nka Georgian vtt, ttml, srv3, srv2, srv1, json3\r\nde German vtt, ttml, srv3, srv2, srv1, json3\r\nel Greek vtt, ttml, srv3, srv2, srv1, json3\r\ngn Guarani vtt, ttml, srv3, srv2, srv1, json3\r\ngu Gujarati vtt, ttml, srv3, srv2, srv1, json3\r\nht Haitian Creole vtt, ttml, srv3, srv2, srv1, json3\r\nha Hausa vtt, ttml, srv3, srv2, srv1, json3\r\nhaw Hawaiian vtt, ttml, srv3, srv2, srv1, json3\r\niw Hebrew vtt, ttml, srv3, srv2, srv1, json3\r\nhi Hindi vtt, ttml, srv3, srv2, srv1, json3\r\nhmn Hmong vtt, ttml, srv3, srv2, srv1, json3\r\nhu Hungarian vtt, ttml, srv3, srv2, srv1, json3\r\nis Icelandic vtt, ttml, srv3, srv2, srv1, json3\r\nig Igbo vtt, ttml, srv3, srv2, srv1, json3\r\nid Indonesian vtt, ttml, srv3, srv2, srv1, json3\r\nga Irish vtt, ttml, srv3, srv2, srv1, json3\r\nit Italian vtt, ttml, srv3, srv2, srv1, json3\r\nja Japanese vtt, ttml, srv3, srv2, srv1, json3\r\njv Javanese vtt, ttml, srv3, srv2, srv1, json3\r\nkn Kannada vtt, ttml, srv3, srv2, srv1, json3\r\nkk Kazakh vtt, ttml, srv3, srv2, srv1, json3\r\nkm Khmer vtt, ttml, srv3, srv2, srv1, json3\r\nrw Kinyarwanda vtt, ttml, srv3, srv2, srv1, json3\r\nko Korean vtt, ttml, srv3, srv2, srv1, json3\r\nkri Krio vtt, ttml, srv3, srv2, srv1, json3\r\nku Kurdish vtt, ttml, srv3, srv2, srv1, json3\r\nky Kyrgyz vtt, ttml, srv3, srv2, srv1, json3\r\nlo Lao vtt, ttml, srv3, srv2, srv1, json3\r\nla Latin vtt, ttml, srv3, srv2, srv1, json3\r\nlv Latvian vtt, ttml, srv3, srv2, srv1, json3\r\nln Lingala vtt, ttml, srv3, srv2, srv1, json3\r\nlt Lithuanian vtt, ttml, srv3, srv2, srv1, json3\r\nlb Luxembourgish vtt, ttml, srv3, srv2, srv1, json3\r\nmk Macedonian vtt, ttml, srv3, srv2, srv1, json3\r\nmg Malagasy vtt, ttml, srv3, srv2, srv1, json3\r\nms Malay vtt, ttml, srv3, srv2, srv1, json3\r\nml Malayalam vtt, ttml, srv3, srv2, srv1, json3\r\nmt Maltese vtt, ttml, srv3, srv2, srv1, json3\r\nmi M\u0101ori vtt, ttml, srv3, srv2, srv1, json3\r\nmr Marathi vtt, ttml, srv3, srv2, srv1, json3\r\nmn Mongolian vtt, ttml, srv3, srv2, srv1, json3\r\nne Nepali vtt, ttml, srv3, srv2, srv1, json3\r\nnso Northern Sotho vtt, ttml, srv3, srv2, srv1, json3\r\nno Norwegian vtt, ttml, srv3, srv2, srv1, json3\r\nny Nyanja vtt, ttml, srv3, srv2, srv1, json3\r\nor Odia vtt, ttml, srv3, srv2, srv1, json3\r\nom Oromo vtt, ttml, srv3, srv2, srv1, json3\r\nps Pashto vtt, ttml, srv3, srv2, srv1, json3\r\nfa Persian vtt, ttml, srv3, srv2, srv1, json3\r\npl Polish vtt, ttml, srv3, srv2, srv1, json3\r\npt Portuguese vtt, ttml, srv3, srv2, srv1, json3\r\npa Punjabi vtt, ttml, srv3, srv2, srv1, json3\r\nqu Quechua vtt, ttml, srv3, srv2, srv1, json3\r\nro Romanian vtt, ttml, srv3, srv2, srv1, json3\r\nru Russian vtt, ttml, srv3, srv2, srv1, json3\r\nsm Samoan vtt, ttml, srv3, srv2, srv1, json3\r\nsa Sanskrit vtt, ttml, srv3, srv2, srv1, json3\r\ngd Scottish Gaelic vtt, ttml, srv3, srv2, srv1, json3\r\nsr Serbian vtt, ttml, srv3, srv2, srv1, json3\r\nsn Shona vtt, 
ttml, srv3, srv2, srv1, json3\r\nsd Sindhi vtt, ttml, srv3, srv2, srv1, json3\r\nsi Sinhala vtt, ttml, srv3, srv2, srv1, json3\r\nsk Slovak vtt, ttml, srv3, srv2, srv1, json3\r\nsl Slovenian vtt, ttml, srv3, srv2, srv1, json3\r\nso Somali vtt, ttml, srv3, srv2, srv1, json3\r\nst Southern Sotho vtt, ttml, srv3, srv2, srv1, json3\r\nes Spanish vtt, ttml, srv3, srv2, srv1, json3\r\nsu Sundanese vtt, ttml, srv3, srv2, srv1, json3\r\nsw Swahili vtt, ttml, srv3, srv2, srv1, json3\r\nsv Swedish vtt, ttml, srv3, srv2, srv1, json3\r\ntg Tajik vtt, ttml, srv3, srv2, srv1, json3\r\nta Tamil vtt, ttml, srv3, srv2, srv1, json3\r\ntt Tatar vtt, ttml, srv3, srv2, srv1, json3\r\nte Telugu vtt, ttml, srv3, srv2, srv1, json3\r\nth Thai vtt, ttml, srv3, srv2, srv1, json3\r\nti Tigrinya vtt, ttml, srv3, srv2, srv1, json3\r\nts Tsonga vtt, ttml, srv3, srv2, srv1, json3\r\ntr Turkish vtt, ttml, srv3, srv2, srv1, json3\r\ntk Turkmen vtt, ttml, srv3, srv2, srv1, json3\r\nuk Ukrainian vtt, ttml, srv3, srv2, srv1, json3\r\nur Urdu vtt, ttml, srv3, srv2, srv1, json3\r\nug Uyghur vtt, ttml, srv3, srv2, srv1, json3\r\nuz Uzbek vtt, ttml, srv3, srv2, srv1, json3\r\nvi Vietnamese vtt, ttml, srv3, srv2, srv1, json3\r\ncy Welsh vtt, ttml, srv3, srv2, srv1, json3\r\nfy Western Frisian vtt, ttml, srv3, srv2, srv1, json3\r\nxh Xhosa vtt, ttml, srv3, srv2, srv1, json3\r\nyi Yiddish vtt, ttml, srv3, srv2, srv1, json3\r\nyo Yoruba vtt, ttml, srv3, srv2, srv1, json3\r\nzu Zulu vtt, ttml, srv3, srv2, srv1, json3\r\ni6kccBc-FBQ has no subtitles\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '91302ed349f34dc26cc1d661bb45a4b71f4417f7', 'files': [{'path': 'yt_dlp/options.py', 'Loc': {\"(None, 'create_parser', 216)\": {'mod': [853, 857, 861]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0\nThis one doesn't really count, because the user knew the command problem was only a quoting issue", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/options.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "6075a029dba70a89675ae1250e7cdfd91f0eba41", "is_iss": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/10356", "iss_label": "question", "title": "Unable to install curl_cffi", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nI am trying to install `curl_cffi` in order to get around Vimeo's new TLS fingerprinting anti-bot protection. 
I have run the command `pipx install 'yt-dlp[default,curl_cffi]' --force'`, which gives the output:\r\n\r\n```\r\nInstalling to existing venv 'yt-dlp'\r\n\u26a0\ufe0f Note: yt-dlp was already on your PATH at /opt/homebrew/bin/yt-dlp\r\n installed package yt-dlp 2024.7.2, installed using Python 3.12.4\r\n These apps are now globally available\r\n - yt-dlp\r\n These manual pages are now globally available\r\n - man1/yt-dlp.1\r\n\u26a0\ufe0f Note: '/Users/username-hidden/.local/bin' is not on your PATH environment variable. These apps will not be globally accessible until your PATH is updated. Run `pipx ensurepath` to automatically add it,\r\n or manually modify your PATH in your shell's config file (e.g. ~/.bashrc).\r\ndone! \u2728 \ud83c\udf1f \u2728\r\n```\r\n\r\nFrom this output, I understand that `curl_cffi` would have been installed. However, running `yt-dlp --list-impersonate-targets -vU` does not show it.\r\n\r\nI intend to use `--impersonate chrome` but I am stuck at `curl_cffi` installation. Any help would be **greatly** appreciated. Thank you.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['--list-impersonate-targets', '-vU']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.07.02 from yt-dlp/yt-dlp [93d33cb29] (pip)\r\n[debug] Python 3.12.4 (CPython arm64 64bit) - macOS-14.5-arm64-arm-64bit (OpenSSL 3.3.1 4 Jun 2024)\r\n[debug] exe versions: ffmpeg 7.0.1 (setts), ffprobe 7.0.1\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.06.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.0, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1831 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.07.02 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.07.02 from yt-dlp/yt-dlp)\r\n[info] Available impersonate targets\r\nClient OS Source\r\n---------------------------------------\r\nChrome - curl_cffi (not available)\r\nEdge - curl_cffi (not available)\r\nSafari - curl_cffi (not available)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".zshrc", ".bash_profile"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".bash_profile", ".zshrc"], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "4a601c9eff9fb42e24a4c8da3fa03628e035b35b", "is_iss": 1, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/8479", "iss_label": "question\nNSFW", "title": "OUTPUT TEMPLATE --output %(title)s.%(ext)s", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the 
[README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I'm running yt-dlp version **2023.10.13** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\nI'm using the latest yt-dlp which states does not support website https://sfmcompile.club/. Understood.\r\nIssue: The pages appear to just be playlist of others' posts. A series of pages may take the format below:\r\n\r\n**_### LINKS ARE NSFW_**\r\nhttps://sfmcompile.club/category/overwatch/dva/page/2/\r\nhttps://sfmcompile.club/category/overwatch/dva/page/3/\r\nhttps://sfmcompile.club/category/overwatch/dva/page/4/\r\n**_### LINKS ARE NSFW_**\r\n\r\nI'm copying/pasting to a text file, the link's base, adding the page number, then the trailing slash. After having a series of these weblinks, I run yt-dlp against this text file. Each weblink contains about 8 posts per page. yt-dlp downloads the 8 posts for that page.\r\nDVA (1)\r\nDVA (2)\r\nDVA (3)\r\nDVA (4)\r\nDVA (5)\r\nDVA (6)\r\nDVA (7)\r\nDVA (8)\r\n\r\nyt-dlp then goes to the next weblink in the text file and \"reports\" the file has already been downloaded:\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nand again,\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nand again,\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nIt repeats with whatever number of weblinks in the text file until exhausted. I might be trying to download 8 weblinks multiplied by 8 posts which should be 64, but is instead only the original 8 from the first page.\r\n\r\nI understand I can add something like %(autonumber)s to the output but each of these posts in the playlists do have an actual title to them.\r\nDVA eating lunch\r\nDVA at the park\r\nDVA at work\r\n(lol)\r\n\r\nI'd prefer to use the original title of the post rather than repeating title with a follow-on count.\r\nDVA (1) 00001\r\nDVA (2) 00002\r\nDVA (3) 00003\r\nDVA (4) 00004\r\nDVA (5) 00005\r\nDVA (6) 00006\r\nDVA (7) 00007\r\nDVA (8) 00008\r\n\r\nDVA (1) 00009\r\nDVA (2) 00010\r\netc.\r\n\r\nI've experimented with using most of the OUTPUT TEMPLATE options on the yt-dlp page but can't for the life of me seem to figure out which output string is going to give me the output I desire. Most of them give me **NA**.\r\n\r\nid (string): Video identifier\r\ntitle (string): Video title\r\nfulltitle (string): Video title ignoring live timestamp and generic title\r\next (string): Video filename extension\r\nalt_title (string): A secondary title of the video\r\ndescription (string): The description of the video\r\ndisplay_id (string): An alternative identifier for the video\r\n\r\nEven tried %(original_url)s w/ no luck, thinking I could at least get the https://www.blahblahblah.com, and then afterward use a mass filename editor to edit out the unwanted https:// and .com. 
Nope, get an NA.\r\n\r\n**If there is a way to \"poll\" a weblink to see \"keywords\" that would be great!**\r\n\r\nIn advance, any help is appreciated.\r\n\r\nMy yt-dlp.conf\r\n\r\n```\r\n--no-download-archive\r\n--no-clean-info-json\r\n--windows-filenames\r\n--trim-filenames 140\r\n--ffmpeg-location \"..\\..\\..\\..\\ffmpeg\\bin\\ffmpeg.exe\"\r\n--audio-format \"mp3\"\r\n--format \"bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4\"\r\n--output \"D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\%(title)s.%(ext)s\"\r\n```\r\n\r\n```\r\nG:\\00OSz\\12win10b zEnt-LTSC 1809 x64\\05Apps\\Multimedia\\Video\\Installed\\yt-dlp Singles\\Support\\Folder Prep\\aX Drive Source>\"..\\..\\..\\yt-dlp.exe\" --config-location \"..\\..\\..\\yt-dlp.conf\" --batch-file \".\\aBatch URLs.txt\" --verbose\r\n[debug] Command-line config: ['--config-location', '..\\\\..\\\\..\\\\yt-dlp.conf', '--batch-file', '.\\\\aBatch URLs.txt', '--verbose']\r\n[debug] | Config \"..\\..\\..\\yt-dlp.conf\": ['--no-download-archive', '--no-clean-info-json', '--windows-filenames', '--trim-filenames', '140', '--ffmpeg-location', '..\\\\..\\\\..\\\\..\\\\ffmpeg\\\\bin\\\\ffmpeg.exe', '--audio-format', 'mp3', '--format', 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4', '--output', 'D:\\\\11Downloadz\\\\bTorrents Complete\\\\Podcasts\\\\tmp in\\\\%(title)s.%(ext)s']\r\n[debug] Batch file urls: ['https://sfmcompile.club/tag/lazyprocrastinator/page/1/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/2/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/3/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/4/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/5/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/6/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/7/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/8/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/9/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/10/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/11/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/12/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/13/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/14/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/15/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/16/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/17/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/18/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/19/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/20/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/21/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/22/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/23/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/24/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/25/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/26/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/27/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/28/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/29/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/30/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/31/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/32/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/33/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/34/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/35/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/36/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/37/', 
'https://sfmcompile.club/tag/lazyprocrastinator/page/38/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/39/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/40/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/41/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/42/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/43/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/44/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/45/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/46/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/47/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/48/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/49/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/50/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/51/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/52/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/53/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/54/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/55/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/56/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/57/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/58/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/59/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/60/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/61/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/62/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/63/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/64/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/65/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/66/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/67/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/68/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/69/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/70/']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2023.09.24.003044 [de015e930] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg N-110072-g073ec3b9da-20230325 (setts), ffprobe N-110072-g073ec3b9da-20230325\r\n[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, sqlite3-3.35.5, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1886 extractors\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/page/1/\r\n[generic] 1: Downloading webpage\r\n[redirect] Following redirect to https://sfmcompile.club/tag/lazyprocrastinator/\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/\r\n[generic] lazyprocrastinator: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] lazyprocrastinator: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified 8 html5 embeds\r\n[download] Downloading playlist: LazyProcrastinator Archives\r\n[generic] Playlist LazyProcrastinator Archives: Downloading 8 items of 8\r\n[download] Downloading item 1 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-1: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on 
\"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-blowjob-pov-Sound-update.mp4\"\r\n[debug] File locking is not supported. Proceeding without locking\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (1).mp4\r\n[download] 100% of 2.66MiB in 00:00:00 at 5.05MiB/s\r\n[download] Downloading item 2 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-2: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Hood-Lunafreya-spooning-fuck-Sound-update.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (2).mp4\r\n[download] 100% of 3.53MiB in 00:00:00 at 6.47MiB/s\r\n[download] Downloading item 3 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-3: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Bunny-Serah-Farron-sideway-proneboned.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (3).mp4\r\n[download] 100% of 3.09MiB in 00:00:00 at 6.05MiB/s\r\n[download] Downloading item 4 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-4: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Bunny-Serah-Farron-sideway-fucked.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (4).mp4\r\n[download] 100% of 2.97MiB in 00:00:00 at 5.50MiB/s\r\n[download] Downloading item 5 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-5: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Sadako-caught-on-tape.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (5).mp4\r\n[download] 100% of 1.77MiB in 00:00:00 at 4.34MiB/s\r\n[download] Downloading item 6 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-6: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Hood-Lunafreya-mating-press-Sound-update.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (6).mp4\r\n[download] 100% of 2.65MiB in 00:00:00 at 4.40MiB/s\r\n[download] Downloading item 7 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-7: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on 
\"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-hand-holding-cowgirl.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (7).mp4\r\n[download] 100% of 1.67MiB in 00:00:00 at 4.73MiB/s\r\n[download] Downloading item 8 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-8: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-dangerous-handjob-pov.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (8).mp4\r\n[download] 100% of 4.85MiB in 00:00:00 at 4.86MiB/s\r\n[download] Finished downloading playlist: LazyProcrastinator Archives\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/page/2/\r\n[generic] 2: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] 2: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified 8 html5 embeds\r\n[download] Downloading playlist: LazyProcrastinator Archives\r\n[generic] Playlist LazyProcrastinator Archives: Downloading 8 items of 8\r\n[download] Downloading item 1 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-1: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-dangerous-thighjob-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (1).mp4 has already been downloaded\r\n[download] 100% of 2.66MiB\r\n[download] Downloading item 2 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-2: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-face-sitting-and-feetjob.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (2).mp4 has already been downloaded\r\n[download] 100% of 3.53MiB\r\n[download] Downloading item 3 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-3: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-heel-torture-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (3).mp4 has already been downloaded\r\n[download] 100% of 3.09MiB\r\n[download] Downloading item 4 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-4: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Ashley-Graham-cowgirl-riding-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (4).mp4 has already been downloaded\r\n[download] 100% of 2.97MiB\r\n[download] 
Downloading item 5 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-5: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Lunafreya-lifted-anal-Sound-update.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (5).mp4 has already been downloaded\r\n[download] 100% of 1.77MiB\r\n[download] Downloading item 6 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-6: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-sucking-nip-and-handjob-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (6).mp4 has already been downloaded\r\n[download] 100% of 2.65MiB\r\n[download] Downloading item 7 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-7: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-reverse-cowgirl-ride-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (7).mp4 has already been downloaded\r\n[download] 100% of 1.67MiB\r\n[download] Downloading item 8 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-8: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/2B-thighs-crushing-and-handjob.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (8).mp4 has already been downloaded\r\n```\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\r\n- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["yt-dlp.conf"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": ["yt-dlp.conf"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "a903d8285c96b2c7ac7915f228a17e84cbfe3ba4", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1238", "iss_label": "question", "title": "[Question] How to use Sponsorblock as part of Python script", "body": "\r\n\r\n\r\n## Checklist\r\n\r\n\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README\r\n- [x] I've read the opening an issue section in CONTRIBUTING.md\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n- [x] I have given an appropriate title to the issue\r\n\r\n\r\n## Question\r\n\r\n\r\n\r\nWhat 
are the relevant `ydl_opts` to use Sponsorblock with yt-dlp as part of a Python script?\r\n\r\n[README.md](https://github.com/yt-dlp/yt-dlp/blob/master/README.md#sponsorblock-options) documents usage on the command line and [yt_dlp/YoutubeDL.py](https://github.com/yt-dlp/yt-dlp/blob/master/yt_dlp/YoutubeDL.py) doesn't mention Sponsorblock at all.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a903d8285c96b2c7ac7915f228a17e84cbfe3ba4', 'files': [{'path': 'yt_dlp/__init__.py', 'Loc': {\"(None, '_real_main', 62)\": {'mod': [427, 501]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "8531d2b03bac9cc746f2ee8098aaf8f115505f5b", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/10462", "iss_label": "question", "title": "Cookie not loading when downloading instagram videos", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\nI tried to download instagram videos with this code but the cookie does not load.\r\n\r\nBut with ```yt-dlp --cookies instagram_cookie.txt \"https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\"``` it does.\r\nIs there something wrong with my code? 
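Two things stand out in the user code that follows: the substring test is reversed (`url in ".m3u8"` asks whether the whole URL occurs inside the string `".m3u8"`, so that branch can never fire for a real URL; the intended check is `".m3u8" in url`), and `cookies` is not a recognised `YoutubeDL` parameter — the verbose params dump further down shows it being carried along with no effect. The CLI's `--cookies` flag corresponds to the `cookiefile` parameter in the embedded API. A minimal sketch, assuming a Netscape-format cookie export named `instagram_cookie.txt`:

```python
from yt_dlp import YoutubeDL

ydl_opts = {
    "format": "best[ext=mp4]",
    "outtmpl": "%(title)s.%(ext)s",
    "verbose": True,
    # --cookies on the command line maps to 'cookiefile' here,
    # not to a 'cookies' key.
    "cookiefile": "instagram_cookie.txt",
}

with YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja"])
```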
If so, please let me know the solution.\r\nSorry if I have missed something.\r\n\r\n```\r\nfrom yt_dlp import YoutubeDL\r\nimport subprocess\r\n\r\ndef download_video(url):\r\n if url in \".m3u8\":\r\n subprocess.run(f'ffmpeg -i {url} -c copy \"%name%.mp4\"', shell=True)\r\n print(\"m3u8\u30d5\u30a1\u30a4\u30eb\u3092\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u307e\u3057\u305f\")\r\n else:\r\n ydl_opts = {\r\n 'format': 'best[ext=mp4]',\r\n 'outtmpl': '%(title)s.%(ext)s',\r\n 'verbose': True,\r\n }\r\n\r\n if \"instagram.com\" in url:\r\n ydl_opts[\"cookies\"] = \"instagram_cookie.txt\"\r\n print(ydl_opts)\r\n \r\n with YoutubeDL(ydl_opts) as ydl:\r\n result = ydl.extract_info(url, download=True)\r\n file_path = ydl.prepare_filename(result)\r\n print(f\"{file_path}\u3092\u30c0\u30a6\u30f3\u30ed\u30fc\u30c9\u307e\u3057\u305f\")\r\n \r\n return file_path\r\n\r\nif __name__ == \"__main__\":\r\n download_video(input(\"URL:\"))\r\n```\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\r\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\nURL:https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\r\n{'format': 'best[ext=mp4]', 'outtmpl': '%(title)s.%(ext)s', 'verbose': True, 'cookies': 'instagram_cookie.txt'}\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2024.07.13.232701 from yt-dlp/yt-dlp-nightly-builds [150ecc45d] (pip) API\r\n[debug] params: {'format': 'best[ext=mp4]', 'outtmpl': '%(title)s.%(ext)s', 'verbose': True, 'cookies': 'instagram_cookie.txt', 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.74 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}\r\n[debug] Python 3.10.14 (CPython x86_64 64bit) - Linux-6.5.0-1023-gcp-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1834 extractors\r\n[Instagram] Extracting URL: https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\r\n[Instagram] C9SEsmYCx_M: Setting up session\r\n[Instagram] C9SEsmYCx_M: Downloading JSON metadata\r\nWARNING: [Instagram] C9SEsmYCx_M: General metadata extraction failed (some metadata might be missing).\r\n[Instagram] C9SEsmYCx_M: Downloading webpage\r\nWARNING: [Instagram] unable to extract shared data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\nWARNING: [Instagram] Main webpage is locked behind the login page. Retrying with embed webpage (some metadata might be missing).\r\n[Instagram] C9SEsmYCx_M: Downloading embed webpage\r\nWARNING: [Instagram] unable to extract additional data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\nERROR: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 740, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/instagram.py\", line 460, in _real_extract\r\n self.raise_login_required('Requested content is not available, rate-limit reached or login required')\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 1245, in raise_login_required\r\n raise ExtractorError(msg, expected=True)\r\n\r\nTraceback (most recent call last):\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1622, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1757, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 740, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/instagram.py\", line 460, in _real_extract\r\n self.raise_login_required('Requested content is not available, rate-limit reached or login required')\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 1245, in raise_login_required\r\n raise ExtractorError(msg, expected=True)\r\nyt_dlp.utils.ExtractorError: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/runner/moive-download-exe/main.py\", line 27, in \r\n download_video(input(\"URL:\"))\r\n File \"/home/runner/moive-download-exe/main.py\", line 20, in download_video\r\n result = ydl.extract_info(url, download=True)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1611, in extract_info\r\n return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1640, in wrapper\r\n self.report_error(str(e), e.format_traceback())\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1088, in report_error\r\n self.trouble(f'{self._format_err(\"ERROR:\", self.Styles.ERROR)} {message}', *args, **kwargs)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1027, in trouble\r\n raise DownloadError(message, exc_info)\r\nyt_dlp.utils.DownloadError: ERROR: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. 
Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8531d2b03bac9cc746f2ee8098aaf8f115505f5b', 'files': [{'path': 'yt_dlp/YoutubeDL.py', 'Loc': {\"('YoutubeDL', None, 189)\": {'mod': [335]}}, 'status': 'modified'}, {'path': 'yt_dlp/__init__.py', 'Loc': {\"(None, 'parse_options', 737)\": {'mod': [901]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py", "yt_dlp/YoutubeDL.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "e59c82a74cda5139eb3928c75b0bd45484dbe7f0", "is_iss": 0, "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/11152", "iss_label": "question", "title": "How to use --merge-output-format?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nHello,\r\n\r\nThis is the first time I'm trying to use the \"--merge-output-format\" option to download and merge a video stream with an audio stream\u2026 and it failed:\r\n\r\n```\r\nyoutube-dlp.exe -qF\r\nyoutube-dlp.exe -f '160+140' --merge-output-format mp4 https://www.youtube.com/watch?v=123ABC\r\nRequested format is not available. 
Use --list-formats for a list of available formats\r\n```\r\n\r\nWhat is the right way to use that switch?\r\n\r\nThank you.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU `)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg 2024-06-13-git-0060a368b1-essentials_build-www.gyan.dev (setts), ffprobe 2024-06-13-git-0060a368b1-essentials_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1830 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\n[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec\r\n[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.09.27/SHA2-256SUMS\r\nCurrent version: stable@2024.08.06 from yt-dlp/yt-dlp\r\nLatest version: stable@2024.09.27 from yt-dlp/yt-dlp\r\nCurrent Build Hash: 468a6f8bf1d156ad173e000a40f696d4fbd69c5aa7360229329b9063a388e7d0\r\nUpdating to stable@2024.09.27 from yt-dlp/yt-dlp ...\r\n[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp/releases/download/2024.09.27/yt-dlp.exe\r\nUpdated yt-dlp to stable@2024.09.27 from yt-dlp/yt-dlp\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e59c82a74cda5139eb3928c75b0bd45484dbe7f0', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 1430)': {'mod': [1430]}}, 'status': 'modified'}, {'path': 'yt_dlp/options.py', 'Loc': {\"(None, 'create_parser', 219)\": {'mod': [786, 790]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/options.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f18ebbd31645437afaa9738fcf2b5ed8b48cb021", "is_iss": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/6177", "iss_label": "Feature", "title": "Workflow that can follow different paths and skip some of them.", "body": "### Feature Idea\n\nHi.\r\nI am very interested in the ability to create a workflow that can follow different paths and skip some if they are not needed.\r\n\r\nFor example, I want to create an image and save it under a fixed name (unique). 
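ComfyUI's execution engine does have a mechanism for skipping whole branches without MUTE or BYPASS: lazy inputs. A node can mark an input with `{"lazy": True}` and implement `check_lazy_status` to report which inputs it still needs; an upstream subgraph that only feeds an unneeded lazy input is never executed. A sketch of a hypothetical "load or generate" node along those lines (the class name, default path, and category are illustrative, not part of ComfyUI):

```python
import os

import numpy as np
import torch
from PIL import Image


class LoadOrGenerate:
    """Return a cached image from disk if it exists; otherwise request the
    lazily-evaluated 'generated' input, which triggers the upstream branch."""

    @classmethod
    def INPUT_TYPES(cls):
        return {"required": {
            "path": ("STRING", {"default": "output/fixed_name.png"}),
            "generated": ("IMAGE", {"lazy": True}),
        }}

    RETURN_TYPES = ("IMAGE",)
    FUNCTION = "run"
    CATEGORY = "conditional"

    def check_lazy_status(self, path, generated=None):
        # Returning [] tells the engine no further inputs are required,
        # so the branch that would produce 'generated' is skipped entirely.
        return [] if os.path.exists(path) else ["generated"]

    def run(self, path, generated=None):
        if os.path.exists(path):
            img = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32) / 255.0
            return (torch.from_numpy(img)[None, ...],)  # ComfyUI IMAGE is [B, H, W, C]
        return (generated,)

# In a real custom_nodes package this class would be exposed via
# NODE_CLASS_MAPPINGS so ComfyUI can register it.
```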
But tomorrow (or after restart) I want to run this workflow again and work with the already created image, which I created and saved earlier, and not waste time on its creation (upscale, modification, etc.), but just check if this image is in my folder, and if it is, then just load it and work with the loaded image, and the branch that creates the image will not run at all (skip this branch).\r\nBut it's important that the script does this by itself (without MUTE or BYPASS).\r\n\r\nExample \r\n![Screenshot_1](https://github.com/user-attachments/assets/840c86a0-7944-49ca-95fd-15825a632c7f)\r\n\r\nThis will help save a lot of time on complex workflows that need to be improved or modernized. And it can also save resources in case of a break or lack of memory - it will be possible to skip large parts of the scheme if they have already been made and saved (without keeping in memory models that have already worked).\r\n\n\n### Existing Solutions\n\nI've been trying for a long time to find out if such a possibility exists, but I couldn't find it. If such a feature is already implemented, where can I find it? Thanks.\n\n### Other\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ltdrdata", "pro": "ComfyUI-extension-tutorials", "path": ["ComfyUI-Impact-Pack/tutorial/switch.md"]}], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["ComfyUI-Impact-Pack/tutorial/switch.md"], "test": [], "config": [], "asset": []}} -{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "834ab278d2761c452f8e76c83fb62d8f8ce39301", "is_iss": 0, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/1064", "iss_label": "", "title": "Error occurred when executing CLIPVisionEncode", "body": "Hi there, \r\nsomehow i cant get unCLIP to work \r\n\r\nThe .png has the unclip example workflow i tried out, but it gets stuck in the CLIPVisionEncode Module.\r\nWhat can i do to solve this? 
\r\n\r\nError occurred when executing CLIPVisionEncode:\r\n\r\n'NoneType' object has no attribute 'encode_image'\r\n\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 144, in recursive_execute\r\noutput_data, output_ui = get_output_data(obj, input_data_all)\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 74, in get_output_data\r\nreturn_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 67, in map_node_over_list\r\nresults.append(getattr(obj, func)(**slice_dict(input_data_all, i)))\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 742, in encode\r\noutput = clip_vision.encode_image(image)\r\n\r\n\r\n\r\n![unclip_2pass](https://github.com/comfyanonymous/ComfyUI/assets/141161676/51b5ed7c-d5d9-4b88-a973-a54882039653)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '834ab278d2761c452f8e76c83fb62d8f8ce39301', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 30)': {'mod': [30]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "model\n+\nDoc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "3c60ecd7a83da43d694e26a77ca6b93106891251", "is_iss": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/5229", "iss_label": "User Support", "title": "Problem with ComfyUI workflow \"ControlNetApplySD3 'NoneType' object has no attribute 'copy'\"", "body": "### Your question\n\nI get the following error when running the workflow\r\n\r\nI leave a video of what I am working on as a reference.\r\n\r\nhttps://www.youtube.com/watch?v=MbQv8zoNEfY\r\n\r\nvideo of reference\n\n### Logs\n\n```powershell\n# ComfyUI Error Report\r\n## Error Details\r\n- **Node Type:** ControlNetApplySD3\r\n- **Exception Type:** AttributeError\r\n- **Exception Message:** 'NoneType' object has no attribute 'copy'\r\n## Stack Trace\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in 
apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\n\r\n```\r\n## System Information\r\n- **ComfyUI Version:** v0.2.3-3-g6632365\r\n- **Arguments:** ComfyUI\\main.py --windows-standalone-build\r\n- **OS:** nt\r\n- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]\r\n- **Embedded Python:** true\r\n- **PyTorch Version:** 2.4.1+cu124\r\n## Devices\r\n\r\n- **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n - **Type:** cuda\r\n - **VRAM Total:** 25769148416\r\n - **VRAM Free:** 19327837688\r\n - **Torch VRAM Total:** 5100273664\r\n - **Torch VRAM Free:** 57107960\r\n\r\n## Logs\r\n```\r\n2024-10-12 11:47:24,318 - root - INFO - Total VRAM 24575 MB, total RAM 65461 MB\r\n2024-10-12 11:47:24,318 - root - INFO - pytorch version: 2.4.1+cu124\r\n2024-10-12 11:47:24,318 - root - INFO - Set vram state to: NORMAL_VRAM\r\n2024-10-12 11:47:24,318 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n2024-10-12 11:47:26,738 - root - INFO - Using pytorch cross attention\r\n2024-10-12 11:47:32,778 - root - INFO - [Prompt Server] web root: D:\\ComfyUI_windows_portable\\ComfyUI\\web\r\n2024-10-12 11:47:36,818 - root - INFO - Total VRAM 24575 MB, total RAM 65461 MB\r\n2024-10-12 11:47:36,818 - root - INFO - pytorch version: 2.4.1+cu124\r\n2024-10-12 11:47:36,818 - root - INFO - Set vram state to: NORMAL_VRAM\r\n2024-10-12 11:47:36,818 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n2024-10-12 11:47:37,468 - root - INFO - \r\nImport times for custom nodes:\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\websocket_image_save.py\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\cg-use-everywhere\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI_UltimateSDUpscale\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\rgthree-comfy\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-KJNodes\r\n2024-10-12 11:47:37,468 - root - INFO - 0.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI_essentials\r\n2024-10-12 11:47:37,468 - root - INFO - 0.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\r\n2024-10-12 11:47:37,468 - root - INFO - 0.3 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-eesahesNodes\r\n2024-10-12 11:47:37,468 - root - INFO - 0.4 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Manager\r\n2024-10-12 11:47:37,468 - root - INFO - 0.4 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Impact-Pack\r\n2024-10-12 11:47:37,468 - root - INFO - 1.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-AdvancedLivePortrait\r\n2024-10-12 11:47:37,468 - root - INFO - \r\n2024-10-12 11:47:37,478 - root - INFO - Starting server\r\n\r\n2024-10-12 11:47:37,478 - root - INFO - To see the GUI go to: http://127.0.0.1:8188\r\n2024-10-12 12:16:10,093 - root - INFO - got prompt\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 147:\r\n2024-10-12 12:16:10,103 - root - ERROR - * UpscaleModelLoader 83:\r\n2024-10-12 12:16:10,103 - 
root - ERROR - - Value not in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\\\4x-ClearRealityV1.pth', 'ClearRealityV1\\\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp32.bin']\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 321:\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 311:\r\n2024-10-12 12:16:10,103 - root - ERROR - * InstantX Flux Union ControlNet Loader 334:\r\n2024-10-12 12:16:10,103 - root - ERROR - - Value not in list: control_net_name: 'flux\\InstantX_flux.safetensors' not in ['flux-canny-controlnet-v3.safetensors', 'flux-canny-controlnet.safetensors', 'flux-canny-controlnet_v2.safetensors', 'flux-depth-controlnet-v3.safetensors', 'flux-depth-controlnet.safetensors', 'flux-depth-controlnet_v2.safetensors', 'flux-hed-controlnet-v3.safetensors', 'flux-hed-controlnet.safetensors', 'flux\\\\diffusion_pytorch_model.safetensors']\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 301:\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 140:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 320:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 145:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 319:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 179:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 84:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 258:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 299:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 138:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 146:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 322:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 317:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 323:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - 
ERROR - Failed to validate prompt for output 316:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 300:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 87:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 318:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 141:\r\n2024-10-12 12:16:10,118 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,647 - root - INFO - Using pytorch attention in VAE\r\n2024-10-12 12:16:10,647 - root - INFO - Using pytorch attention in VAE\r\n2024-10-12 12:16:18,202 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16\r\n2024-10-12 12:16:18,202 - root - INFO - model_type FLUX\r\n2024-10-12 12:27:11,335 - root - ERROR - error could not detect control model type.\r\n2024-10-12 12:27:11,335 - root - ERROR - error checkpoint does not contain controlnet or t2i adapter data D:\\ComfyUI_windows_portable\\ComfyUI\\models\\controlnet\\flux\\diffusion_pytorch_model.safetensors\r\n2024-10-12 12:27:13,290 - root - INFO - Requested to load FluxClipModel_\r\n2024-10-12 12:27:13,294 - root - INFO - Loading 1 new model\r\n2024-10-12 12:27:13,301 - root - INFO - loaded completely 0.0 4777.53759765625 True\r\n2024-10-12 12:27:51,099 - root - WARNING - clip missing: ['text_projection.weight']\r\n2024-10-12 12:27:52,730 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'\r\n2024-10-12 12:27:52,745 - root - ERROR - Traceback (most recent call last):\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'copy'\r\n\r\n2024-10-12 12:27:52,750 - root - INFO - Prompt executed in 702.63 seconds\r\n2024-10-12 12:44:26,904 - root - INFO - got prompt\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 147:\r\n2024-10-12 12:44:26,917 - root - ERROR - * UpscaleModelLoader 83:\r\n2024-10-12 12:44:26,917 - 
root - ERROR - - Value not in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\\\4x-ClearRealityV1.pth', 'ClearRealityV1\\\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp32.bin']\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 321:\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 311:\r\n2024-10-12 12:44:26,917 - root - ERROR - * InstantX Flux Union ControlNet Loader 334:\r\n2024-10-12 12:44:26,917 - root - ERROR - - Value not in list: control_net_name: 'flux\\InstantX_flux.safetensors' not in ['flux-canny-controlnet-v3.safetensors', 'flux-canny-controlnet.safetensors', 'flux-canny-controlnet_v2.safetensors', 'flux-depth-controlnet-v3.safetensors', 'flux-depth-controlnet.safetensors', 'flux-depth-controlnet_v2.safetensors', 'flux-hed-controlnet-v3.safetensors', 'flux-hed-controlnet.safetensors', 'flux\\\\diffusion_pytorch_model.safetensors']\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 301:\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 140:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 320:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 145:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 319:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 179:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 84:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 258:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 299:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 138:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 146:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 322:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 317:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 323:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - 
ERROR - Failed to validate prompt for output 316:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 300:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 87:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 318:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,932 - root - ERROR - Failed to validate prompt for output 141:\r\n2024-10-12 12:44:26,932 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,992 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'\r\n2024-10-12 12:44:26,992 - root - ERROR - Traceback (most recent call last):\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'copy'\r\n\r\n2024-10-12 12:44:26,997 - root - INFO - Prompt executed in 0.06 seconds\r\n```\r\n## Attached Workflow\r\nPlease make sure that workflow does not contain any sensitive information such as API keys or passwords.\r\n```\r\nWorkflow too large. 
Please manually upload the workflow from local file system.\r\n```\r\n\r\n## Additional Context\r\n(Please add any additional context or steps to reproduce the error here)\n```\n\n\n### Other\n\n![Screenshot 2024-10-12 at 12-30-54 ComfyUI](https://github.com/user-attachments/assets/f0f76743-0561-4c02-8915-43143904b5b3)\r\n![Screenshot 2024-10-12 at 12-29-58 ComfyUI](https://github.com/user-attachments/assets/91e3539f-aa8b-4e68-bd58-4c4894345ce3)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "Shakker-Labs", "pro": "FLUX.1-dev-ControlNet-Union-Pro"}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["FLUX.1-dev-ControlNet-Union-Pro"]}} -{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "494cfe5444598f22eced91b6f4bfffc24c4af339", "is_iss": 0, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/96", "iss_label": "", "title": "Feature Request: model and output path setting", "body": "Sym linking is not ideal, setting a model folder is pretty standard these days and most of us use more than one software that uses models. \r\nThe same for choosing where to put the output images, personally mine go to a portable drive, not sure how to do that with ComfyUI.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '494cfe5444598f22eced91b6f4bfffc24c4af339', 'files': [{'path': 'extra_model_paths.yaml.example', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["extra_model_paths.yaml.example"], "asset": []}} -{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f18ebbd31645437afaa9738fcf2b5ed8b48cb021", "is_iss": 1, "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/6186", "iss_label": "User Support\nCustom Nodes Bug", "title": "error", "body": "### Your question\n\n[Errno 2] No such file or directory: 'D:\\\\ComfyUI_windows_portable_nvidia\\\\ComfyUI_windows_portable\\\\ComfyUI\\\\custom_nodes\\\\comfyui_controlnet_aux\\\\ckpts\\\\LiheYoung\\\\Depth-Anything\\\\.cache\\\\huggingface\\\\download\\\\checkpoints\\\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete' is:issue \n\n### Logs\n\n```powershell\n.\n```\n\n\n### Other\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".cache"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [".cache"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "e9df345a7853c52bfe98830bd2c9a07aaa7b81d9", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/159", "iss_label": "", "title": "Raspberry Pi Memory Error", "body": "* face_recognition version: 02.1\r\n* Python version: 2.7\r\n* Operating System: Raspian\r\n\r\n### Description\r\n\r\nI installed to face_recognition my raspberry pi successfully for python 3. Now i am trying to install for Python2 because i need it. 
When i was trying install i am taking a Memory Error. I attached the images from my error. Please help me \r\n\r\n![20170821_190454](https://user-images.githubusercontent.com/23421095/29530146-1e98e7be-86ab-11e7-91ea-e17c02170f63.jpg)\r\n![20170821_190501](https://user-images.githubusercontent.com/23421095/29530148-2113ac22-86ab-11e7-934d-e2062359f51a.jpg)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e9df345a7853c52bfe98830bd2c9a07aaa7b81d9', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "0961fd1aaf97336e544421318fcd4b55feeb1a79", "is_iss": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/533", "iss_label": "", "title": "knn neighbors name list?", "body": "In **face_recognition_knn.py**\r\nI want name list of 5 neighbors. So I change n_neighbors=5.\r\n`closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=5)`\r\nAnd it returned just 5 values of **distance_threshold** from trained .clf file\r\n\r\nI found that `knn_clf.predict(faces_encodings)` return only 1 best match name.\r\n\r\nHow can I get the name list of all that 5 people?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "scikit-learn"}, {"pro": "scikit-learn", "path": ["sklearn/neighbors/_classification.py"]}], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["sklearn/neighbors/_classification.py"], "doc": [], "test": [], "config": [], "asset": ["scikit-learn"]}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "f21631401119e4af2e919dd662c3817b2c480c75", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/135", "iss_label": "", "title": "face_recognization with python", "body": "* face_recognition version:\r\n* Python version:3.5\r\n* Operating System:windows\r\n\r\n### Description\r\n\r\n I am working with some python face reorganization code in which I want to compare sampleface.jpg which contains a sample face with facegrid.jpg. The facegrid.jpg itself has some 6 faces in it. I am getting results as true for every face while I should be getting only one. The code is below. 
\r\n\r\n```python\r\nimport face_recognition\r\nimage = face_recognition.load_image_file(\"faceGrid.jpg\")\r\nsample_image = face_recognition.load_image_file(\"sampleface.jpg\")\r\n\r\nsample_face_encoding = face_recognition.face_encodings(sample_image)\r\n\r\nface_locations = face_recognition.face_locations(image)\r\n\r\nprint (len(face_locations), \" Faces\")\r\n\r\nfor face_location in face_locations:\r\n top, right, bottom, left = face_location\r\n face_image = image[top:bottom, left:right]\r\n face_encodings = face_recognition.face_encodings(face_image)[0]\r\n if face_recognition.compare_faces(sample_face_encoding,face_encodings)[0]:\r\n print (\"Found!\")\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f21631401119e4af2e919dd662c3817b2c480c75', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "cea177b75f74fe4e8ce73cf33da2e7e38e658ba4", "is_iss": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/726", "iss_label": "", "title": "cv2.imshow error", "body": "Hi All,\r\n\r\nwith the help of docs i am trying to display image with below code and getting error. i tried all possible ways like file extension, path and python version to resolve this error and not able to rectify. So, please do needful,\r\n\r\nNote:- 1.image present in the path. \r\n 2. print statement result None as output.\r\n 3. i am using python 3.6 & opencv-python-4.0.0.21\r\n\r\nimport numpy\r\nimport cv2\r\n\r\nimg = cv2.imread('C:\\\\Users\\\\Public\\\\Pictures\\\\Sample Pictures\\\\Penguins.jpeg',0) # to read an image\r\n\r\ncv2.imshow('image',img) # to display image\r\ncv2.waitKey(0)\r\ncv2.destroyAllWindows()\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/rrmamidi/Desktop/old Desktop/compress_1/python/basic python scripts/about camera_opencv_cv2/about_img_read.py\", line 11, in \r\n cv2.imshow('image',img) # to display image\r\ncv2.error: OpenCV(4.0.0) C:\\projects\\opencv-python\\opencv\\modules\\highgui\\src\\window.cpp:350: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'\r\n\r\nThanks,\r\nRaja", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [12], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "b8fed6f3c0ad5ab2dab72d6251c60843cad71386", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/643", "iss_label": "", "title": "Train model with more than 1 image per person", "body": "* face_recognition version: 1.2.3\r\n* Python version: 2.7.15\r\n* Operating System: Windows 10\r\n\r\n### Description\r\n\r\nI Would like to train the model with more than 1 image per each person to achieve better recognition results. 
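One common pattern for using several images per person is to keep every encoding, labelled with the person's name, and match an unknown face against all of them at once (the bundled `examples/face_recognition_knn.py` builds its training set the same way). Note that `face_encodings` returns a list with one encoding per detected face, so `[0]` means "the first face found", and `[1]` raises `IndexError` whenever only one face was detected. A minimal sketch with placeholder file names:

```python
import face_recognition
import numpy as np

# Several encodings per person, all carrying the same label.
known_encodings, known_names = [], []
for path in ["alice_1.jpg", "alice_2.jpg", "alice_3.jpg"]:  # placeholder files
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)
    if encodings:  # an empty list means no face was detected in this image
        known_encodings.append(encodings[0])
        known_names.append("alice")

# Match an unknown face against every stored encoding; smallest distance wins.
unknown = face_recognition.face_encodings(
    face_recognition.load_image_file("query.jpg"))[0]
distances = face_recognition.face_distance(known_encodings, unknown)
best = int(np.argmin(distances))
print(known_names[best], distances[best])
```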
-{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "b8fed6f3c0ad5ab2dab72d6251c60843cad71386", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/643", "iss_label": "", "title": "Train model with more than 1 image per person", "body": "* face_recognition version: 1.2.3\r\n* Python version: 2.7.15\r\n* Operating System: Windows 10\r\n\r\n### Description\r\n\r\nI would like to train the model with more than 1 image per person to achieve better recognition results. Is it possible?\r\n\r\nOne more question: what does [0] mean here?\r\n```\r\nknown_face_encoding_user = face_recognition.face_encodings('image.jpg')[0]\r\n```\r\nIf I put [1] here I receive an \"IndexError: list index out of range\" error.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b8fed6f3c0ad5ab2dab72d6251c60843cad71386', 'files': [{'path': 'examples/face_recognition_knn.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["examples/face_recognition_knn.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "aff06e965e895d8a6e781710e7c44c894e3011a3", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/68", "iss_label": "", "title": "cv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize", "body": "* face_recognition version:\r\n* Python version: 3.4\r\n* Operating System: Raspbian Jessie\r\n\r\n### Description\r\n\r\nWhenever I try to run facerec_from_webcam_faster.py, I get the error below. Note that I have checked my local files; the image to be recognized is placed appropriately.\r\n\r\n### \r\n\r\n\r\n```\r\nOpenCV Error: Assertion failed (ssize.area() > 0) in resize, file /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp, line 3229\r\nTraceback (most recent call last):\r\n File \"pj_webcam.py\", line 31, in \r\n small_frame = cv2.resize(frame, (1, 1), fx=0.01, fy=0.01)\r\ncv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'aff06e965e895d8a6e781710e7c44c894e3011a3', 'files': [{'path': 'examples/facerec_from_webcam_faster.py', 'Loc': {'(None, None, None)': {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/facerec_from_webcam_faster.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "6da4a2ff0f0183280cdc2bffa58ddae8bf93ac41", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/181", "iss_label": "", "title": "does load_image_file have a version which read from byte[] not just from the disk file", "body": "Does load_image_file have a variant that reads from a byte array in memory, not just from a file on disk?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6da4a2ff0f0183280cdc2bffa58ddae8bf93ac41', 'files': [{'path': 'face_recognition/api.py', 'Loc': {\"(None, 'load_image_file', 73)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["face_recognition/api.py"], "doc": [], "test": [], "config": [], "asset": []}}
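Several reports here hinge on the same point as the [0] question above: face_encodings returns a list with one 128-d encoding per detected face, so [0] is simply the first face, [1] only exists if two faces were found, and an empty result (no face detected) makes any index raise IndexError. A minimal defensive sketch (the image path is illustrative):

```python
import face_recognition

image = face_recognition.load_image_file("image.jpg")
encodings = face_recognition.face_encodings(image)  # one entry per detected face

# [0] means "the first detected face"; an empty list (no face found)
# makes encodings[0] raise IndexError: list index out of range.
if encodings:
    known_face_encoding_user = encodings[0]
else:
    print("no face detected in image.jpg")
```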
"iss_html_url": "https://github.com/ageitgey/face_recognition/issues/209", "iss_label": "", "title": "face_locations get wrong result but dlib is correct", "body": "* face_recognition version: 1.0.0\r\n* Python version: 3.5\r\n* Operating System: Ubuntu 16.04 LTS\r\n\r\n### Description\r\nI run the example find_faces_in_picture_cnn.py to process the image from this link.\r\nhttps://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1507896274082&di=824f7f59943a71e2e9904d22175ce92c&imgtype=0&src=http%3A%2F%2Fwww.moontalk.com.tw%2Fupload%2Fimages%2F20160606angelina-03.jpg\r\nThe program detect the hand as a face ,I check the code and run example in dlib from this link ,the result is correct.\r\nhttp://dlib.net/cnn_face_detector.py.html\r\nSo the problem maybe occur in load_image_file ?\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5f804870c14803c2664942c958f11112276a79cc', 'files': [{'path': 'examples/find_faces_in_picture_cnn.py', 'Loc': {'(None, None, None)': {'mod': [12]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/find_faces_in_picture_cnn.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "a96484edc270697c666c1c32ba5163cf8e71b467", "is_iss": 0, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/1004", "iss_label": "", "title": "IndexError: list index out of range while attempting to automatically recognize faces ", "body": "* face_recognition version: 1.2.3\r\n* Python version: 3.7.3\r\n* Operating System: Windows 10 x64\r\n\r\n### Description\r\n\r\nHello everyone,\r\nI was attempting to modify facerec_from_video_file.py in order to make it save the unknown faces in the video and recognize them based on the first frame they appear on for example if an unknown face appears on the frame 14 it should be recognized as \"new 14\" but i keep getting the error \"IndexError: list index out of range\" when a new face appears.\r\nSo here is my code and the traceback\r\n\r\n### What I Did\r\n\r\n```\r\nimport face_recognition\r\nimport cv2\r\n\r\ninput_movie = cv2.VideoCapture(\"video.mp4\")\r\nlength = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT))\r\n\r\n# Create an output movie file (make sure resolution/frame rate matches input video!)\r\nfourcc = cv2.VideoWriter_fourcc(*'XVID')\r\noutput_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360))\r\n\r\n\r\nnewimage = face_recognition.load_image_file(\"anchor.png\")\r\nnew_face_encoding = face_recognition.face_encodings(newimage)[0]\r\n\r\nknown_faces = [\r\n new_face_encoding,\r\n \r\n]\r\n\r\n# Initialize some variables\r\nface_locations = []\r\nface_encodings = []\r\nface_names = []\r\nframe_number = 0\r\n\r\n\r\ndef recog(frame_number, known_faces, face_names):\r\n toenc = []\r\n \r\n torec = face_recognition.load_image_file(r\"New\\Unknown%s.jpg\" %str(frame_number))\r\n \r\n #if not len(torec):\r\n # print(\"cannot find image\")\r\n #torec = face_recognition.load_image_file(r\"New\\Unknown%s.jpg\" %str(frame_number))\r\n toenc.append((face_recognition.face_encodings(torec))[0])\r\n if not len(toenc):\r\n print(\"can't be encoded\")\r\n known_faces.append(toenc.pop())\r\n face_names.append(\"new %s\" %str(frame_number)) \r\n \r\n# Load some sample pictures and learn how to recognize 
them.\r\n\r\nwhile True:\r\n # Grab a single frame of video\r\n ret, frame = input_movie.read()\r\n frame_number += 1\r\n\r\n # Quit when the input video file ends\r\n if not ret:\r\n break\r\n\r\n # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)\r\n rgb_frame = frame[:, :, ::-1]\r\n\r\n # Find all the faces and face encodings in the current frame of video\r\n face_locations = face_recognition.face_locations(rgb_frame)\r\n face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)\r\n\r\n #face_names = []\r\n for face_encoding in face_encodings:\r\n # See if the face is a match for the known face(s)\r\n match = face_recognition.compare_faces(known_faces, face_encoding)\r\n \r\n \r\n # If you had more than 2 faces, you could make this logic a lot prettier\r\n # but I kept it simple for the demo\r\n name = \"Unknown\"\r\n \r\n face_names.append(name)\r\n\r\n # Label the results\r\n for (top, right, bottom, left), name in zip(face_locations, face_names):\r\n if not name:\r\n continue\r\n\r\n # Draw a box around the face\r\n unface = cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)\r\n if name == \"Unknown\":\r\n res = frame[top:bottom, left:right]\r\n cv2.imwrite(r\"New\\Unknown%s.jpg\" %str(frame_number), res)\r\n recog(frame_number, known_faces, face_names)\r\n \r\n cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)\r\n font = cv2.FONT_HERSHEY_DUPLEX\r\n cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1)\r\n \r\n # Write the resulting image to the output video file\r\n print(\"Processing frame {} / {}\".format(frame_number, length))\r\n #output_movie.write(frame)\r\n cv2.imshow(\"frame\", frame)\r\n if( cv2.waitKey(27) & 0xFF == ord('q')):\r\n break\r\n\r\n# All done!\r\ninput_movie.release()\r\ncv2.destroyAllWindows()\r\n\r\n```\r\n### Output\r\n```\r\nIn [1]: runfile('D:/project_new/facerec_from_video_file.py', wdir='D:/project_new')\r\nProcessing frame 1 / 3291\r\nProcessing frame 2 / 3291\r\nProcessing frame 3 / 3291\r\nProcessing frame 4 / 3291\r\nProcessing frame 5 / 3291\r\nProcessing frame 6 / 3291\r\nProcessing frame 7 / 3291\r\nProcessing frame 8 / 3291\r\nProcessing frame 9 / 3291\r\nProcessing frame 10 / 3291\r\nProcessing frame 11 / 3291\r\nProcessing frame 12 / 3291\r\nTraceback (most recent call last):\r\n\r\n File \"\", line 1, in \r\n runfile('D:/project_new/facerec_from_video_file.py', wdir='D:/project_new')\r\n\r\n File \"C:\\Users\\saber\\Anaconda3\\lib\\site-packages\\spyder_kernels\\customize\\spydercustomize.py\", line 827, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"C:\\Users\\saber\\Anaconda3\\lib\\site-packages\\spyder_kernels\\customize\\spydercustomize.py\", line 110, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"D:/project_new/facerec_from_video_file.py\", line 81, in \r\n recog(frame_number, known_faces, face_names)\r\n\r\n File \"D:/project_new/facerec_from_video_file.py\", line 35, in recog\r\n toenc.append((face_recognition.face_encodings(torec))[0])\r\n\r\nIndexError: list index out of range\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a96484edc270697c666c1c32ba5163cf8e71b467', 'files': [{'path': 'examples/facerec_from_video_file.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", 
"info_type": "Code"}, "loctype": {"code": ["examples/facerec_from_video_file.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "a8830627e89bcfb9c9dda2c8f7cac5d2e5cfb6c0", "is_iss": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/178", "iss_label": "", "title": "IndexError: list index out of range", "body": "IndexError: list index out of range\r\n\r\nmy code:\r\n\r\nimport face_recognition\r\nknown_image = face_recognition.load_image_file(\"D:/1.jpg\")\r\nunknown_image = face_recognition.load_image_file(\"D:/2.jpg\")\r\nbiden_encoding = face_recognition.face_encodings(known_image)[0]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "7f183afd9c848f05830c145890c04181dcc1c46b", "is_iss": 1, "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/93", "iss_label": "", "title": "how to do live face recognition with RPi", "body": "* Operating System: Debian\r\n\r\n### Description\r\n\r\ni want to use the example ```facerec_from_webcam_faster.py``` \r\nbut i don't know how to change the video_output source to the PiCam\r\n\r\n### What I Did\r\n\r\n```\r\ncamera = picamera.PiCamera()\r\ncamera.resolution = (320, 240)\r\noutput = np.empty((240, 320, 3), dtype=np.uint8)\r\n\r\n\r\nwhile True:\r\n # Grab a single frame of video\r\n ret, frame = camera.capture(output, format=\"rgb\")\r\n```\r\nbut i got erros, so how can i use the picam as source?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [18], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "14318e392fbe2d69511441edf5a172c4c72d6961", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/7095", "iss_label": "status/close", "title": "\u6587\u672c\u68c0\u6d4b\u5b8c\u7684\u56fe\u7247\u600e\u4e48\u8fdb\u884c\u6587\u672c\u8bc6\u522b\u554a\uff1f", "body": "\u662f\u8981\u628a\u8fb9\u754c\u6846\u6846\u51fa\u7684\u56fe\u7247\u526a\u88c1\u4e0b\u6765\uff0c\u9001\u8fdb\u8bc6\u522b\u6a21\u578b\u5417\uff1f\u5173\u4e8e\u8fd9\u4e2a\u7684\u4ee3\u7801\u5728\u54ea\u91cc\u554a", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '14318e392fbe2d69511441edf5a172c4c72d6961', 'files': [{'path': 'tools/infer/predict_system.py', 'Loc': {\"('TextSystem', '__call__', 67)\": {'mod': [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/infer/predict_system.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "db60893201ad07a8c20d938a8224799f932779ad", 
"is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5641", "iss_label": "inference and deployment", "title": "PaddleServing\u600e\u6837\u4fee\u6539\u76f8\u5173\u53c2\u6570", "body": "\u6839\u636e [**\u57fa\u4e8ePaddleServing\u7684\u670d\u52a1\u90e8\u7f72**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/deploy/pdserving/README_CN.md) \u540e\uff0c\u600e\u6837\u5bf9\u6a21\u578b\u53ca\u670d\u52a1\u7684\u4e00\u4e9b\u53c2\u6570\u8fdb\u884c\u4fee\u6539\u5462\uff1f\r\n\u4f8b\u5982\u5982\u4e0b\u53c2\u6570\uff1a\r\nuse_tensorrt\r\nbatch_size\r\ndet_limit_side_len\r\nbatch_num\r\ntotal_process_num\r\n...\r\n\r\n\u7591\u60d1\uff1a\r\n1\u3001[**PaddleHub Serving\u90e8\u7f72**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/deploy/hubserving/readme.md)\uff0c\u652f\u6301\u4e00\u4e9b\u53c2\u6570\u4fee\u6539\r\n2\u3001[**\u57fa\u4e8ePython\u5f15\u64ce\u7684PP-OCR\u6a21\u578b\u5e93\u63a8\u7406**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/doc/doc_ch/inference_ppocr.md)\uff0c\u4e5f\u652f\u6301\u53c2\u6570\u4fee\u6539\r\n\r\n\u4e0a\u9762\u5217\u4e3e\u7684\u51e0\u4e2a\u53c2\u6570\u90fd\u6781\u5176\u91cd\u8981\uff0c\u4f46\u662fPaddleServing\u65b9\u6cd5\u5374\u4e0d\u652f\u6301\uff0c\u8bf7\u6307\u793a\uff01\u662f\u5426\u662f\u54ea\u91cc\u53ef\u4ee5\u8bbe\u7f6e\u800c\u6211\u6ca1\u627e\u5230", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'db60893201ad07a8c20d938a8224799f932779ad', 'files': [{'path': 'deploy/pdserving/web_service.py', 'Loc': {\"('DetOp', 'init_op', 30)\": {'mod': [31]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["deploy/pdserving/web_service.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "0afe6c3262babda2012074110520fe9d1a3c63c0", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/2405", "iss_label": "status/close", "title": "\u8f7b\u91cf\u6a21\u578b\u7684\u63a8\u65ad\u4e2d\uff0c\u6bcf\u9694\u51e0\u884c\u5c31\u4f1a\u51fa\u73b0\u4e00\u884c\u8bc6\u522b\u4e3a\u4e71\u7801", "body": "![image](https://user-images.githubusercontent.com/62594309/113710560-73095c00-9716-11eb-828d-40026f37715e.png)\r\n\u5c31\u50cf\u8fd9\u91cc\u84dd\u8272\u5708\u8d77\u6765\u7684\u8fd9\u884c\r\n\r\n\u4f46\u662f\u901a\u7528\u6a21\u578b\u5c31\u6ca1\u6709\u8fd9\u4e2a\u95ee\u9898\r\n\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\u5f15\u8d77\u7684\u5462\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0afe6c3262babda2012074110520fe9d1a3c63c0', 'files': [{'path': 'deploy/hubserving/readme_en.md', 'Loc': {'(None, None, 192)': {'mod': [192]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code\nDoc\nHow to modify own code"}, "loctype": {"code": [], "doc": ["deploy/hubserving/readme_en.md"], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "64edd41c277c60c672388be6d5764be85c1de43a", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5427", "iss_label": "status/close\nstale", "title": 
"rknn\u4e0d\u652f\u6301DepthwiseSeparable\u6a21\u5757\u4e2d\u7684ConvBNLayer\u5c42\u53c2\u6570stride(p1, p2) p1\u4e0ep2\u4e0d\u4e00\u81f4\u7b97\u5b50", "body": "rknn\u4e0d\u652f\u6301DepthwiseSeparable\u6a21\u5757\u4e2d\u7684ConvBNLayer\u5c42\u53c2\u6570stride(p1, p2) p1\u4e0ep2\u4e0d\u4e00\u81f4\u7b97\u5b50\uff0c\u8fd9\u6837\u6d89\u53ca\u5230\u4fee\u6539\u7f51\u7edc\u7ed3\u6784\uff0c\u6211\u770b\u4e86\u4e0bstride(p1, p2)\u4e2dp1\u4e0ep2\u4e0d\u4e00\u81f4\u7684\u60c5\u51b5\u662f\u4e3a\u4e86\u505a\u4e0b\u91c7\u6837\u7684\u64cd\u4f5c\uff0c\u8bf7\u95ee\u6211\u60f3\u4fdd\u6301p1\u4e0ep2\u76f8\u7b49\u7684\u60c5\u51b5\u4e0b\uff0c\u8be5\u5982\u4f55\u4fee\u6539DepthwiseSeparable\u6a21\u5757\u6216\u8005\u66f4\u4e0a\u5c42\u6a21\u5757\u7684\u53c2\u6570\u5462\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '64edd41c277c60c672388be6d5764be85c1de43a', 'files': [{'path': 'ppocr/modeling/backbones/rec_mobilenet_v3.py', 'Loc': {\"('MobileNetV3', '__init__', 23)\": {'mod': [48]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ppocr/modeling/backbones/rec_mobilenet_v3.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "2e352dcc06ba86159099ec6a2928c7ce556a7245", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/7542", "iss_label": "status/close", "title": "PaddleOCR\u52a0\u8f7d\u81ea\u5df1\u7684\u8bc6\u522b\u6a21\u578b\u8fdb\u884c\u56fe\u50cf\u68c0\u6d4b+\u8bc6\u522b\u4e0e\u4ec5\u4f7f\u7528\u8bc6\u522b\u6a21\u578b\u65f6\u6548\u679c\u4e0d\u4e00\u81f4", "body": "\u5148\u7528PaddleOCR\u7684\u56fe\u50cf\u68c0\u6d4b\u529f\u80fd\uff0c\u6309\u7167\u5f97\u5230\u7684\u8bc6\u522b\u6846\u5e26\u6587\u5b57\u7684\u5c0f\u56fe\u88c1\u526a\u51fa\u6765\uff0c\u6807\u6ce8\u7528\u505a\u8bad\u7ec3\u96c6\uff0c\u5bf9\u6587\u5b57\u8bc6\u522b\u6a21\u578b\u8fdb\u884c\u4e86\u8bad\u7ec3\uff0c\u7136\u540e\u63a8\u7406\u6d4b\u8bd5\u4e86\u4e00\u4e0b\u6ca1\u6709\u95ee\u9898\uff0c\u4e8e\u662f\u4f7f\u7528PaddleOCR\u52a0\u8f7d\u65b0\u8bad\u7ec3\u7684\u6587\u5b57\u8bc6\u522b\u6a21\u578b\u8dd1\u68c0\u6d4b + \u8bc6\u522b\u7684\u6574\u4f53\u6d41\u7a0b\uff0c\u7ed3\u679c\u53d1\u73b0\u51fa\u73b0\u4e86\u8bc6\u522b\u7ed3\u679c\u4e0d\u4e00\u81f4\u7684\u60c5\u51b5\u3002\r\n\r\n- \u7cfb\u7edf\u73af\u5883/System Environment\uff1aCenOS7\r\n- \u7248\u672c\u53f7/Version\uff1aPaddle\uff1a2.3.1.post112 PaddleOCR\uff1a2.6 \u95ee\u9898\u76f8\u5173\u7ec4\u4ef6/Related components\uff1aPaddleOCR\r\n- python/Version: 3.9.12\r\n- \u4f7f\u7528\u6a21\u578bppocrv3\r\n\r\n\u95ee\u9898\u56fe\u7247\uff1a\r\n![image](https://user-images.githubusercontent.com/34825635/189260271-ee896330-02ea-4290-a6da-a8b16a644be2.png)\r\n\r\n* \u5355\u7528\u8bc6\u522b\u6a21\u578b\u8fdb\u884c\u63a8\u7406\u65f6\uff1a\uff08\u6709\u654f\u611f\u4fe1\u606f\u6b64\u5904\u6211\u906e\u6321\u4e86\uff09\r\n`\u524d\u8a00\uff0d\u5ba2\u6237\uff08\u201c\u7532\u65b9\u201d\uff09\u548cXXXXX\uff08\u201c\u4e59\u65b9\u201d\uff09\u6240\u7b7e\u8ba2\u7684\u4e1a\u52a1\u7ea6\u5b9a\u4e66\uff08\u201c\u4e1a\u52a1\u7ea6\u5b9a\u4e66\u201d\uff09\u53ca\u672c\u4e1a\u52a1\u6761\u6b3e\u5176\u540c\u6784\u6210`\r\n* \u4f7f\u7528PaddleOCR\u65f6\uff1a\r\n`\uff08\uff0c\uff09\uff087\uff0c\uff09\u662f\u65f6\uff08\uff0c\uff09\uff0d`\r\n\r\n- \u63a8\u7406\u547d\u4ee4\uff1a\r\n```\r\npython3 
tools/infer/predict_rec.py --image_dir=/home/hr/projects/ppocr/PaddleOCR/data/train_data/rec/train/XXXXX.png --rec_model_dir=/home/hr/projects/ppocr/PaddleOCR/output/inference/rec_ppocr_v3_distillation/Teacher --rec_image_shape=\"3, 48, 640\" --rec_char_dict_path=/home/hr/projects/ppocr/PaddleOCR/ppocr/utils/ppocr_keys_v1.txt\r\n```\r\n- Parameters in the config file:\r\n```\r\n# image_shape was changed\r\nimage_shape: [3, 48, 640]\r\n```\r\n- PaddleOCR loading parameters:\r\n```\r\npaddle_ocr_engine = PaddleOCR(\r\n use_angle_cls=True, \r\n lang=\"ch\", \r\n rec_model_dir=\"./output/inference/rec_ppocr_v3_distillation/Teacher\",\r\n rec_image_shape=\"3, 48, 640\",\r\n rec_char_dict_path=\"./ppocr/utils/ppocr_keys_v1.txt\") \r\n```\r\n\r\nAny help or suggestions would be greatly appreciated!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2e352dcc06ba86159099ec6a2928c7ce556a7245', 'files': [{'path': 'paddleocr.py', 'Loc': {\"('PaddleOCR', '__init__', 445)\": {'mod': [480]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["paddleocr.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "443de01526a1c7108934990c4b646ed992f0bce8", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5209", "iss_label": "status/close", "title": "How can pdserving return the text together with the text coordinates?", "body": "Currently pdserving only returns the text, not the text coordinates. How can I get it to return the text coordinates as well?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '443de01526a1c7108934990c4b646ed992f0bce8', 'files': [{'path': 'deploy/pdserving/ocr_reader.py', 'Loc': {\"('OCRReader', 'postprocess', 425)\": {'mod': []}}, 'status': 'modified'}, {'path': 'deploy/pdserving/web_service.py', 'Loc': {\"('DetOp', 'postprocess', 57)\": {'mod': [63]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["deploy/pdserving/web_service.py", "deploy/pdserving/ocr_reader.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "ab16f2e4f9a4eac2eeb5f0324ab950b2215780d0", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3735", "iss_label": "", "title": "Images for digit training: when detection and recognition are chained together, why is the recognition output Chinese?", "body":
"\u81ea\u5df1\u8bad\u7ec3\u6570\u5b57\u6a21\u578b\uff0c\u7528\u5230\u68c0\u6d4b\u548c\u8bc6\u522b\uff0c\u5728\u8f6cinference\u6a21\u578b\u524d\uff0c\u8bc6\u522b\u7684\u662f\u6570\u5b57\u3002\u4f46\u5c06\u68c0\u6d4b\u548c\u8bc6\u522b\u4e32\u8054\u7684\u65f6\u5019\uff0c\u6309\u7167\u5b98\u65b9\u6559\u7a0b\uff0c\u8f6c\u6362\u6210inference\u6a21\u578b\uff0c\u4e3a\u4ec0\u4e48\u8bc6\u522b\u51fa\u6765\u7684\u662f\u4e2d\u6587\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ab16f2e4f9a4eac2eeb5f0324ab950b2215780d0', 'files': [{'path': 'configs/det/det_mv3_db.yml', 'Loc': {'(None, None, 116)': {'mod': [116]}}, 'status': 'modified'}, {'path': 'tools/infer/predict_det.py', 'Loc': {\"('TextDetector', '__init__', 38)\": {'mod': [42]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/infer/predict_det.py"], "doc": [], "test": [], "config": ["configs/det/det_mv3_db.yml"], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "efc01375c942d87dc1e20856c7159096db16a9ab", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/11715", "iss_label": "", "title": "Can ch_PP-OCRv4_rec_server_infer's support for english be put into the documentation?", "body": "I notice if I am calling\r\n\r\n```\r\nfrom paddleocr import PaddleOCR\r\nocr = Paddle.OCR(\r\ndet_model_dir=ch_PP-OCRv4_det_server_infer,\r\nrec_model_dir=ch_PP-OCRv4_rec_infer\r\nlang='en')\r\n...\r\nresult = ocr.ocr(my_image)\r\n```\r\nthis works fine. However, If i set the rec model to the server version as well (`ch_PP-OCRv4_rec_server_infer`), then I get the following error:\r\n\r\n```\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/paddleocr.py\", line 661, in ocr\r\n dt_boxes, rec_res, _ = self.__call__(img, cls)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/tools/infer/predict_system.py\", line 105, in __call__\r\n rec_res, elapse = self.text_recognizer(img_crop_list)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/tools/infer/predict_rec.py\", line 628, in __call__\r\n rec_result = self.postprocess_op(preds)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 121, in __call__\r\n text = self.decode(preds_idx, preds_prob, is_remove_duplicate=True)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 83, in decode\r\n char_list = [\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 84, in \r\n self.character[text_id]\r\nIndexError: list index out of range\r\n```\r\n\r\nWhich I'm guessing is because it's trying to output Chinese, which has an 8000 character dict, whereas English only has 90 or so. Because it says english is supported by the server model (https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/doc/doc_ch/models_list.md), is it possible to get the ppocrv4 server model to output english successfully? 
\r\n\"Screen\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'efc01375c942d87dc1e20856c7159096db16a9ab', 'files': [{'path': 'paddleocr.py', 'Loc': {'(None, None, None)': {'mod': [76, 80]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["paddleocr.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "9d44728da81e7d56ea5f437845d8d48bc278b086", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3248", "iss_label": "", "title": "\u68c0\u6d4b\u548c\u8bc6\u522b\u600e\u4e48\u8fde\u63a5", "body": "\u60f3\u7528\u8f7b\u91cf\u5316\u7684\u68c0\u6d4b\u6a21\u578b\u914d\u5408RCNN\u8bc6\u522b\uff0c\u4e0d\u77e5\u9053\u600e\u4e48\u5c06\u4e24\u4e2a\u9636\u6bb5\u8fde\u63a5\u5728\u4e00\u8d77\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9d44728da81e7d56ea5f437845d8d48bc278b086', 'files': [{'path': 'doc/doc_ch/inference.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["doc/doc_ch/inference.md"], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "582e868cf84fca911e195596053f503f890b561b", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/8641", "iss_label": "status/close", "title": "\u8bf7\u5236\u4f5cPP-Structure\u7684PaddleServing\u4f8b\u5b50\u5427", "body": "\u8981\u5199PP-Structure\u5728paddle_serving_server.web_service\u4e2d\u7684Op\u7c7b\uff0c\u611f\u89c9\u6211\u8fd9\u4e2a\u65b0\u624b\u505a\u4e0d\u5230\u554a\u3002\r\n\u6709\u6ca1\u6709\u5927\u795e\u505a\u597d\u4f8b\u5b50\uff0c\u8ba9\u65b0\u624b\u590d\u7528\u5462", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '582e868cf84fca911e195596053f503f890b561b', 'files': [{'path': 'deploy/hubserving/readme.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["deploy/hubserving/readme.md"], "test": [], "config": [], "asset": []}} -{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "35449b5c7440f7706e5a4558e5b3efeb76944432", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3844", "iss_label": "", "title": "HOW TO RESUME TRAINING FROM LAST CHECKPOINT?", "body": "Hi,\r\nI have been training a model on my own dataset, How I can resume the training from last checkpoint saved? And also when I train the model does it save Best weights automatically to some path or we need to provide some argument to do it. 
-{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "adba814904eb4f0aeeec186f158cfb6c212a6e26", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3942", "iss_label": "", "title": "Model zoo 404", "body": "ch_ppocr_mobile_slim_v2.1_det inference model\r\nch_ppocr_mobile_v2.1_det inference and training models\r\nThe links above currently return 404 and cannot be downloaded.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'adba814904eb4f0aeeec186f158cfb6c212a6e26', 'files': [{'path': 'doc/doc_ch/models_list.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["doc/doc_ch/models_list.md"], "test": [], "config": [], "asset": []}}
-{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "c167df2f60d08085167cdc9431101f4b45a8a019", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/6838", "iss_label": "status/close", "title": "Mac M1 Pro can't install paddleOCR2.0.1~2.5.0.3, but I can install paddleOCR 1.1.1 and run successful.", "body": "Please provide the following information to quickly locate the problem\r\n\r\n- System Environment: MacBook Pro (14-inch, 2021), Apple M1 Pro, 16 GB\r\n- Version: PyCharm 2022.1.2 and an Anaconda-created Python 3.8 environment.\r\n- Paddle: Monterey 12.3\r\n- PaddleOCR: 2.0.1~2.5.0.3\r\n- Related components: PaddleOCR, Numpy\r\n- Command Code:\r\n\r\n1. python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple (runs fine, Run OK!)\r\n2.
pip install \"paddleocr>=2.0.1\"\uff08\u8fd0\u884c\u5931\u8d25\uff0c\u62a5\u9519\uff0cToo much ERROR\uff01\uff09\uff08\u5982\u679c\u6211\u4e0d\u6307\u5b9apaddleOCR1\u7684\u7248\u672c\u53f7\uff0c\u4f1a\u81ea\u52a8\u5b89\u88c5paddleOCR 1.1.1\uff0c\u5e76\u4e14\u53ef\u4ee5\u6b63\u5e38\u8fd0\u884c\u4f7f\u7528\uff0c\u53732.0.1\u7248\u672c\u5f00\u59cb\u5168\u90e8\u5b89\u88c5\u5931\u8d25\uff09\r\n\r\n- \u5b8c\u6574\u62a5\u9519/Complete Error Message\uff08\u8be6\u89c1markdown\u6587\u6863\uff0c\u592a\u957f\u4e86\uff0c\u8fd9\u91cc\u4f20\u4e0d\u4e0a\u6765\uff09\r\n[\u3010Error Log\u3011Mac M1 Pro can't install paddleOCR2.0.1~2.5.0.3.md](https://github.com/PaddlePaddle/PaddleOCR/files/9075892/Error.Log.Mac.M1.Pro.can.t.install.paddleOCR2.0.1.2.5.0.3.md)\r\n\uff1a\r\n- `ERROR: Cannot install paddleocr==2.0.1, paddleocr==2.0.2, paddleocr==2.0.3, paddleocr==2.0.4, paddleocr==2.0.5, paddleocr==2.0.6, paddleocr==2.2, paddleocr==2.2.0.1, paddleocr==2.2.0.2, paddleocr==2.3, paddleocr==2.3.0.1, paddleocr==2.3.0.2, paddleocr==2.4, paddleocr==2.4.0.1, paddleocr==2.4.0.2, paddleocr==2.4.0.3, paddleocr==2.4.0.4, paddleocr==2.5, paddleocr==2.5.0.2 and paddleocr==2.5.0.3 because these package versions have conflicting dependencies.\r\n\r\nThe conflict is caused by:\r\n paddleocr 2.5.0.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.5.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.5 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.4 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.1 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3.0.1 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.2.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.2.0.1 depends on opencv-contrib-python==4.2.0.32\r\n paddleocr 2.2 depends on opencv-contrib-python==4.2.0.32\r\n paddleocr 2.0.6 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.5 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.4 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.3 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.2 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.1 depends on opencv-python==4.2.0.32\r\n\r\nTo fix this you could try to:\r\n\r\n1. loosen the range of package versions you've specified\r\n2. 
remove package versions to allow pip attempt to solve the dependency conflict\r\n\r\nERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies`\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c167df2f60d08085167cdc9431101f4b45a8a019', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 10)': {'mod': [10]}}, 'status': 'modified'}, {'path': 'setup.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}
-{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "e44c2af7622c97d3faecd37b062e7f1cb922fd40", "is_iss": 0, "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/10298", "iss_label": "status/close", "title": "train warning", "body": "Please provide the following information to quickly locate the problem\r\n\r\n- System Environment: Ubuntu\r\n- Version: Paddle: PaddleOCR: Related components: paddle develop 0.0.0.post116\r\n\r\n- Command Code:\r\n- Complete Error Message:\r\nThere are constantly many warnings like this one:\r\nI0705 11:55:13.443581 28582 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. For Tensor contain only one element, Please modify 'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e44c2af7622c97d3faecd37b062e7f1cb922fd40', 'files': [{'path': 'tools/program.py', 'Loc': {\"(None, 'train', 176)\": {'mod': [349]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/program.py"], "doc": [], "test": [], "config": [], "asset": []}}
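The warning above spells out its own fix; a minimal sketch of the two patterns (paddle.to_tensor here just builds a one-element tensor to read from):

```python
import paddle

loss = paddle.to_tensor(3.14)   # a 0-D, one-element tensor

value = float(loss)             # preferred: works for 0-D tensors
# value = loss.numpy()[0]       # deprecated pattern that triggers the warning
#                               # and will raise an error from release 2.6 on
```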
-{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "dc866c91b9191bce083ec908c5665b7f2f40bd17", "is_iss": 0, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/201", "iss_label": "", "title": "gpt 3", "body": "Hi,\r\nCan we use a free GPT-3 API key?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dc866c91b9191bce083ec908c5665b7f2f40bd17', 'files': [{'path': 'scripts/rerun_edited_message_logs.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/rerun_edited_message_logs.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "5505ec41dd49eb1e86aa405335f40d7a8fa20b0a", "is_iss": 0, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/497", "iss_label": "", "title": "main.py is missing?", "body": "main.py is missing?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5505ec41dd49eb1e86aa405335f40d7a8fa20b0a', 'files': [{'path': 'gpt_engineer/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5\n\u8be2\u95ee\u6587\u4ef6\u7684\u4f4d\u7f6e", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_engineer/"]}}
-{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "a55265ddb46462548a842dae914bb5fcb22181fa", "is_iss": 0, "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/509", "iss_label": "", "title": "Error with Promtfile", "body": "When I try to run the example file I get this error, even though there is something in the prompt file, which, as you can see from the screenshots, is in the example folder.
Does anyone know how I can solve this problem?\r\n\r\n![Screenshot_Error](https://github.com/AntonOsika/gpt-engineer/assets/62028361/cf8c7992-eca9-4bed-b258-bc1bf279082b)\r\n\r\n![Screenshot_of_promt](https://github.com/AntonOsika/gpt-engineer/assets/62028361/a3d573b1-b9da-4201-9980-709c543dadde)\r\n\r\n![image](https://github.com/AntonOsika/gpt-engineer/assets/62028361/dd8edc0d-3248-4d1c-9813-c388f4b81fb5)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a55265ddb46462548a842dae914bb5fcb22181fa', 'files': [{'path': 'projects/example/prompt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["projects/example/prompt"]}}
-{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "cca0ca704a713ab153938e78de6787609c723cad", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1147", "iss_label": "", "title": "urllib.error.URLError: \r\nTotal time: 21.12 seconds\r\n\r\nDo you happen to know what could be the problem?\r\n\r\nthanks in advance!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'cca0ca704a713ab153938e78de6787609c723cad', 'files': [{'path': 'troubleshoot.md', 'Loc': {'(None, None, 43)': {'mod': [43]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "lllyasviel", "pro": "misc", "path": ["ip-adapter-plus-face_sdxl_vit-h.bin"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["troubleshoot.md"], "test": [], "config": [], "asset": ["ip-adapter-plus-face_sdxl_vit-h.bin"]}}
-{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "fc3588875759328d715fa07cc58178211a894386", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/602", "iss_label": "", "title": "[BUG]Memory Issue when generating images for the second time", "body": "When I generate images the first time with one image prompt, everything works fine.\r\nHowever, on the second generation, the GPU runs out of memory.\r\n\r\nHere is the error:\r\n\r\n`Preparation time: 19.46 seconds\r\nloading new\r\nERROR diffusion_model.output_blocks.0.1.transformer_blocks.4.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 22.41 GiB total capacity; 21.52 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR diffusion_model.output_blocks.0.1.transformer_blocks.5.attn1.to_v.weight CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 22.41 GiB total capacity; 21.56 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR diffusion_model.output_blocks.0.1.transformer_blocks.5.attn1.to_out.0.weight CUDA out of memory.
Tried to allocate 20.00 MiB (GPU 0; 22.41 GiB total capacity; 21.56 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nTraceback (most recent call last):\r\n File \"D:\\Repos\\Fooocus\\modules\\async_worker.py\", line 551, in worker\r\n handler(task)\r\n File \"D:\\Repos\\Fooocus\\venv\\Lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\venv\\Lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\modules\\async_worker.py\", line 460, in handler\r\n comfy.model_management.load_models_gpu([pipeline.final_unet])\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 397, in load_models_gpu\r\n cur_loaded_model = loaded_model.model_load(lowvram_model_memory)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 286, in model_load\r\n raise e\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 282, in model_load\r\n self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_patcher.py\", line 161, in patch_model\r\n temp_weight = comfy.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 498, in cast_to_device\r\n return tensor.to(device, copy=copy).to(dtype)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 22.41 GiB total capacity; 21.56 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nTotal time: 209.24 seconds\r\nKeyboard interruption in main thread... closing server.\r\n\r\nProcess finished with exit code -1\r\n`\r\nAt my second attempt to track this error, I add an endpoint. When I clicked generate and it meet the endpoint, I found 6 models have been loaded into memory. 
Maybe this is the issue?\r\n\r\n![image](https://github.com/lllyasviel/Fooocus/assets/93906313/bb31e548-3a5c-4a85-9a63-251fdf7584fb)\r\n![47760ff1859b15fa74cf9dea3aa17a5](https://github.com/lllyasviel/Fooocus/assets/93906313/314e07a3-7112-438a-a6f9-bcb24ae4e5a9)\r\n\r\nThanks for the help~\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fc3588875759328d715fa07cc58178211a894386', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}}
-{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "225947ac1a603124b0274da3e94d2c6cba65f732", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/500", "iss_label": "", "title": "is this a local model or not", "body": "Is this a local model or not?\r\n\r\nI don't get how it could show someone else's prompts if it's local.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '225947ac1a603124b0274da3e94d2c6cba65f732', 'files': [{'path': 'models/checkpoints', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/checkpoints"]}}
-{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f3084894402a4c0b7ed9e7164466bcedd5f5428d", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1508", "iss_label": "", "title": "Problems with installation and correct operation.", "body": "Hello, I had problems installing Fooocus on a GNU/Linux system; many errors occurred during the installation and they were all different. I was not able to capture some of them, but in general terms the errors were as follows: \"could not find versions of python packages that satisfy dependencies\" (during installation), and, when clicking the \"generate\" button, \"nvidia drivers were not found, make sure you have them installed\" with a link to the official website.\r\n\r\nI managed to save the output of the following errors:\r\n\r\n\r\nERROR: Could not find a version that satisfies the requirement accelerate==0.21.0 (from -r requirements_versions.txt (line 5)) (from versions: 0.0.1, 0.1.0, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.5.0, 0.5.1, 0.6.0, 0.6.1, 0.6.2, 0.7.0, 0.7.1, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.13.2, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.17.1, 0.18.0, 0.19.0, 0.20.0, 0.20.1, 0.20.2, 0.20.3)\r\nERROR: No matching distribution found for accelerate==0.21.0 (from -r requirements_versions.txt (line 5))\r\n\r\n\r\n\r\n\r\npython entry_with_update.py\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py']\r\nPython 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]\r\nFooocus version: 2.1.853\r\nRunning on local URL: http://127.0.0.1:7865\r\n\r\nTo create a public link, set share=True in launch().\r\nTotal VRAM 12006 MB, total RAM 31850 MB\r\nxformers version: 0.0.16\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/Fooocus/ldm_patched/modules/model_management.py\", line 222, in \r\n import accelerate\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/__init__.py\", line 3, in \r\n from .accelerator import Accelerator\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/accelerator.py\", line 35, in \r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/checkpointing.py\", line 24, in \r\n from .utils import (\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/__init__.py\", line 131, in \r\n from .bnb import has_4bit_bnb_layers, load_and_quantize_model\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return
original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/bnb.py\", line 42, in <module>\r\n    import bitsandbytes as bnb\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n    from .autograd._functions import (\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 5, in <module>\r\n    import bitsandbytes.functional as F\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 13, in <module>\r\n    from .cextension import COMPILED_WITH_CUDA, lib\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 113, in <module>\r\n    lib = CUDASetup.get_instance().lib\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 109, in get_instance\r\n    cls._instance.initialize()\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 59, in initialize\r\n    binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 125, in evaluate_cuda_setup\r\n    cuda_version_string = get_cuda_version(cuda, cudart_path)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 45, in get_cuda_version\r\n    check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 387, in __getattr__\r\n    func = self.__getitem__(name)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 392, in __getitem__\r\n    func = self._FuncPtr((name_or_ordinal, self))\r\nAttributeError: python: undefined symbol: cudaRuntimeGetVersion\r\n\r\nERROR: LOW VRAM MODE NEEDS accelerate.\r\nSet vram state to: NORMAL_VRAM\r\nAlways offload VRAM\r\nDevice: cuda:0 NVIDIA GeForce RTX 4070 : \r\nVAE dtype: torch.float32\r\nUsing xformers cross attention\r\nException in thread Thread-2 (worker):\r\nTraceback (most recent call last):\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1086, in _get_module\r\n    return importlib.import_module(\".\" + module_name, self.__name__)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n    return _bootstrap._gcd_import(name[level:], package, level)\r\n  File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n  File \"<frozen importlib._bootstrap>\", line 1027, in _find_and_load\r\n  File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n  File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n 
File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n  File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py\", line 27, in <module>\r\n    from ...modeling_utils import PreTrainedModel\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 85, in <module>\r\n    from accelerate import __version__ as accelerate_version\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/__init__.py\", line 3, in <module>\r\n    from .accelerator import Accelerator\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/accelerator.py\", line 35, in <module>\r\n    from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n    from .utils import (\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/__init__.py\", line 131, in <module>\r\n    from .bnb import has_4bit_bnb_layers, load_and_quantize_model\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/bnb.py\", line 42, in <module>\r\n    import bitsandbytes as bnb\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n    from .autograd._functions import (\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 5, in <module>\r\n    import bitsandbytes.functional as F\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 13, in <module>\r\n    from .cextension import COMPILED_WITH_CUDA, lib\r\n  File 
\"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 113, in <module>\r\n    lib = CUDASetup.get_instance().lib\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 109, in get_instance\r\n    cls._instance.initialize()\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 59, in initialize\r\n    binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 125, in evaluate_cuda_setup\r\n    cuda_version_string = get_cuda_version(cuda, cudart_path)\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 45, in get_cuda_version\r\n    check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 387, in __getattr__\r\n    func = self.__getitem__(name)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 392, in __getitem__\r\n    func = self._FuncPtr((name_or_ordinal, self))\r\nAttributeError: python: undefined symbol: cudaRuntimeGetVersion\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/threading.py\", line 1016, in _bootstrap_inner\r\n    self.run()\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/threading.py\", line 953, in run\r\n    self._target(*self._args, **self._kwargs)\r\n  File \"/home/dragon_flow/Fooocus/modules/async_worker.py\", line 25, in worker\r\n    import modules.default_pipeline as pipeline\r\n  File \"/home/dragon_flow/Fooocus/modules/default_pipeline.py\", line 1, in <module>\r\n    import modules.core as core\r\n  File \"/home/dragon_flow/Fooocus/modules/core.py\", line 1, in <module>\r\n    from modules.patch import patch_all\r\n  File \"/home/dragon_flow/Fooocus/modules/patch.py\", line 29, in <module>\r\n    from modules.patch_clip import patch_all_clip\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"/home/dragon_flow/Fooocus/modules/patch_clip.py\", line 23, in <module>\r\n    from transformers import CLIPTextModel, CLIPTextConfig, modeling_utils, CLIPVisionConfig, CLIPVisionModelWithProjection\r\n  File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n    return original_import(name, *args, **kwargs)\r\n  File \"<frozen importlib._bootstrap>\", line 1075, in _handle_fromlist\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1077, in __getattr__\r\n    value = getattr(module, name)\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1076, in __getattr__\r\n    module = self._get_module(self._class_to_module[name])\r\n  File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1088, in _get_module\r\n    raise RuntimeError(\r\n\r\nRuntimeError: Failed to import 
transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):\r\npython: undefined symbol: cudaRuntimeGetVersion\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f3084894402a4c0b7ed9e7164466bcedd5f5428d', 'files': [{'path': 'requirements_versions.txt', 'Loc': {'(None, None, 5)': {'mod': [5]}}, 'status': 'modified'}, {'path': 'readme.md', 'Loc': {'(None, None, 152)': {'mod': [152]}}, 'status': 'modified'}, {'path': 'troubleshoot.md', 'Loc': {'(None, None, 107)': {'mod': [107]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["readme.md", "troubleshoot.md"], "test": [], "config": ["requirements_versions.txt"], "asset": []}} -{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "225947ac1a603124b0274da3e94d2c6cba65f732", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/500", "iss_label": "", "title": "is this a local model or not", "body": "is this a local model or not\r\n\r\ni dont get how it could show someone elses promts if its local", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '225947ac1a603124b0274da3e94d2c6cba65f732', 'files': [{'path': 'models/checkpoints', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/checkpoints"]}} -{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d7439b2d6004d50a0fda19108603a8d1941a185e", "is_iss": 0, "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/3689", "iss_label": "bug\ntriage", "title": "[Bug]: Exits upon attempting to load a model on Windows", "body": "### Checklist\n\n- [X] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)\n- [X] The issue exists on a clean installation of Fooocus\n- [X] The issue exists in the current version of Fooocus\n- [X] The issue has not been reported before recently\n- [ ] The issue has been reported before but has not been fixed yet\n\n### What happened?\n\nAttempting to run Fooocus on Windows 11 (and possibly 10, haven't tested) simply exits when attempting to load the default model, no error or nothing.\n\n### Steps to reproduce the problem\n\n1. Install Fooocus on Windows 11 with a NVIDIA GPU\r\n2. 
Attempt to run it.\n\n### What should have happened?\n\nIt should've loaded the model successfully.\n\n### What browsers do you use to access Fooocus?\n\nMozilla Firefox\n\n### Where are you running Fooocus?\n\nLocally\n\n### What operating system are you using?\n\nWindows 11 (23H2)\n\n### Console logs\n\n```Shell\n(fooocus_env) D:\\Misc4\\Fooocus>python entry_with_update.py\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py']\r\nPython 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]\r\nFooocus version: 2.5.5\r\n[Cleanup] Attempting to delete content of temp dir C:\\Users\\hkcu\\AppData\\Local\\Temp\\fooocus\r\n[Cleanup] Cleanup successful\r\nTotal VRAM 12281 MB, total RAM 16317 MB\r\nSet vram state to: NORMAL_VRAM\r\nAlways offload VRAM\r\nDevice: cuda:0 NVIDIA GeForce RTX 4070 : native\r\nVAE dtype: torch.bfloat16\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nIMPORTANT: You are using gradio version 3.41.2, however version 4.44.1 is available, please upgrade.\r\n--------\r\nRunning on local URL: http://127.0.0.1:7865\r\n\r\nTo create a public link, set `share=True` in `launch()`.\r\n\r\n(fooocus_env) D:\\Misc4\\Fooocus>\n```\n\n\n### Additional information\n\nUsing Fooocus on the exact same machine, with the exact same amount of swap configured (4Gb) works as normal.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd7439b2d6004d50a0fda19108603a8d1941a185e', 'files': [{'path': 'presets/default.json', 'Loc': {'(None, None, 2)': {'mod': [2]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": ["config.txt", "config_modification_tutorial.txt"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n1", "info_type": "Config\n"}, "loctype": {"code": ["presets/default.json"], "doc": [], "test": [], "config": ["config.txt", "config_modification_tutorial.txt"], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6383113e8527e1c73049e26d2b3482a1b0f54b30", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/376", "iss_label": "", "title": "\u5173\u4e8epublic url", "body": "![Screenshot 2023-04-08 170556](https://user-images.githubusercontent.com/78332286/230713429-e0cc9a3f-1da9-4e76-b24a-67c35624a866.png)\r\n\r\n\u8fd9\u4e2apublic url \u662f\u7ecf\u8fc7\u535a\u4e3b\u81ea\u5df1\u642d\u5efa\u7684\u670d\u52a1\u5668\u7684\u5417\uff1f\u6211\u672c\u5730\u642d\u5efa\u4e4b\u540e\u5728\u624b\u673a\u6253\u5f00\u8fd9\u4e2aurl\u4e5f\u80fd\u7528", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6383113e8527e1c73049e26d2b3482a1b0f54b30', 'files': [{'path': 'main.py', 'Loc': {'(None, None, None)': {'mod': [174]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["main.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6c13bb7b46519312222f9afacedaa16225b673a9", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1545", "iss_label": "ToDo", "title": "[Bug]: Qwen1.5-14B-chat \u8fd0\u884c\u4e0d\u4e86", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall 
(\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\nTraceback (most recent call last):\r\n File \".\\request_llms\\local_llm_class.py\", line 158, in run\r\n for response_full in self.llm_stream_generator(**kwargs):\r\n File \".\\request_llms\\bridge_qwen_local.py\", line 46, in llm_stream_generator\r\n for response in self._model.chat_stream(self._tokenizer, query, history=history):\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\anaconda3\\envs\\GPT_academic371\\Lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1688, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'Qwen2ForCausalLM' object has no attribute 'chat_stream'\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\nTraceback (most recent call last):\r\n File \".\\request_llms\\local_llm_class.py\", line 158, in run\r\n for response_full in self.llm_stream_generator(**kwargs):\r\n File \".\\request_llms\\bridge_qwen_local.py\", line 46, in llm_stream_generator\r\n for response in self._model.chat_stream(self._tokenizer, query, history=history):\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\anaconda3\\envs\\GPT_academic371\\Lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1688, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'Qwen2ForCausalLM' object has no attribute 'chat_stream'\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6c13bb7b46519312222f9afacedaa16225b673a9', 'files': [{'path': 'request_llms/bridge_qwen_local.py', 'Loc': {\"('GetQwenLMHandle', 'llm_stream_generator', 34)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_qwen_local.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "dd7a01cda53628ea07ef6192bf257f9ad51f5f47", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/978", "iss_label": "", "title": "[Bug]: \u4ee3\u7406\u914d\u7f6e\u6210\u529f\uff0c\u4ee3\u7406\u6240\u5728\u5730\u67e5\u8be2\u8d85\u65f6\uff0c\u4ee3\u7406\u53ef\u80fd\u65e0\u6548", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nDocker\uff08Windows/Mac\uff09\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6309\u7167\u8981\u6c42\u4fee\u6539\u4ee3\u7406\u914d\u7f6e\u6587\u4ef6`config.py`\uff0c\u57fa\u4e8e`Dockerfile`\u6784\u5efa\u4e4b\u540e\u8fd0\u884c\u51fa\u73b0\uff0c`\u4ee3\u7406\u914d\u7f6e\u6210\u529f\uff0c\u4ee3\u7406\u6240\u5728\u5730\u67e5\u8be2\u8d85\u65f6\uff0c\u4ee3\u7406\u53ef\u80fd\u65e0\u6548`\u7684\u8b66\u544a\u26a0\ufe0f\uff0c\u5b9e\u9645\u8fd0\u884c\u62a5\u9519`ConnectionRefusedError: [Errno 111] Connection 
refused`\uff0c\u8bf7\u5e2e\u5e2e\u6211\u54ea\u91cc\u914d\u7f6e\u53ef\u80fd\u6709\u8bef\r\nps.\u4ee3\u7406\u670d\u52a1\u5730\u5740\u7aef\u53e3\u914d\u7f6e\u6b63\u786e\uff0c\u4e14\u8fd0\u884c\u6b63\u5e38\uff0c\u53ef\u4ee5\u8bbf\u95ee\u5916\u7f51\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\"\u622a\u5c4f2023-07-21\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dd7a01cda53628ea07ef6192bf257f9ad51f5f47', 'files': [{'path': 'check_proxy.py', 'Loc': {\"(None, 'check_proxy', 2)\": {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["check_proxy.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "ea4e03b1d892d462f71bab76ee0bec65d541f6b7", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1286", "iss_label": "", "title": "[Feature]: \u8bf7\u95ee\u662f\u5426\u6210\u529f\u4fee\u6539 api2d-gpt-3.5-turbo-16k \u7cfb\u5217\u6a21\u578b max_token \u4e3a 16385 ", "body": "### Class | \u7c7b\u578b\n\n\u5927\u8bed\u8a00\u6a21\u578b\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ea4e03b1d892d462f71bab76ee0bec65d541f6b7', 'files': [{'path': 'request_llms/bridge_all.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_all.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "526b4d8ecd1adbdcf97946b3bca4c89feda6ec04", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/850", "iss_label": "cause of issue is clear", "title": "[Bug]: Json\u5f02\u5e38 \u201cerror\u201d:", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\nTraceback (most recent call last):\r\n File \"./request_llm/bridge_chatgpt.py\", line 189, in predict\r\n if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0][\"delta\"]) == 0):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nJson\u5f02\u5e38 
\u201cerror\u201d: { \u201cmessage\u201d: \u201c\u201d, \u201ctype\u201d: \u201cinvalid_request_error\u201d, \u201cparam\u201d: null, \u201ccode\u201d: \u201cinvalid_api_key\u201d }}\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n\"image\"\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\ngpt-3.5-turbo : 0 : 1 ..........\r\nTraceback (most recent call last):\r\n File \"/Users/zihengli/chatgpt_academic/request_llm/bridge_chatgpt.py\", line 189, in predict\r\n if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0][\"delta\"]) == 0):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '526b4d8ecd1adbdcf97946b3bca4c89feda6ec04', 'files': [{'path': 'config.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {'(None, None, 101)': {'mod': [101]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["config.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "fdffbee1b02bd515ceb4519ae2a830a547b695b4", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1137", "iss_label": "", "title": "[Bug]: Connection errored out", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nLinux\n\n### Describe the bug | \u7b80\u8ff0\n\n\u4f60\u597d, \u7248\u672c3.54\r\n\u90e8\u7f72\u5728vps\u4e0a, os\u662fubuntu 20.04\r\n\u6302\u5728\u4e86\u516c\u7f51, \u6b64\u524d\u5747\u53ef\u6b63\u5e38\u4f7f\u7528\r\n\u4f46\u662f\u7a81\u7136\u51fa\u73b0\u4e86\u8fd9\u6837\u7684\u95ee\u9898, \u5982\u4e0b\u56fe\r\n\r\n\u8bf7\u95ee\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\u5462? \u662f\u8be5vps\u7684ip\u4e0d\u884c, \u88abopenai ban\u4e86\u4e48? 
\u8fd8\u662f\u4ec0\u4e48\u522b\u7684\u539f\u56e0, \u8c22\u8c22\r\n\r\n\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![Snipaste_2023-09-30_15-01-00](https://github.com/binary-husky/gpt_academic/assets/59535777/9567364a-6bff-4878-b92a-94087a02c655)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fdffbee1b02bd515ceb4519ae2a830a547b695b4', 'files': [{'path': 'main.py', 'Loc': {\"(None, 'main', 3)\": {'mod': [287]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": ["nginx.conf"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0\n2\uff1f", "info_type": "Config"}, "loctype": {"code": ["main.py", "nginx.conf"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "a2002ebd85f441b3cd563bae28e9966006068ad6", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/462", "iss_label": "", "title": "ERROR: Invalid requirement: '__pycache__/' (from line 2 of requirements.txt)", "body": "**Describe the bug \u7b80\u8ff0**\r\nERROR: Invalid requirement: '__pycache__/' (from line 2 of requirements.txt)\r\n**Screen Shot \u622a\u56fe**\r\n![image](https://user-images.githubusercontent.com/46212839/231796758-e537f323-bb03-4fb1-97c8-3b80fddc8476.png)\r\n\r\n![image](https://user-images.githubusercontent.com/46212839/231796688-14d0eb47-8ea7-4d73-9ccd-259b1b10f5df.png)\r\n\r\n**Terminal Traceback \u7ec8\u7aeftraceback\uff08\u5982\u679c\u6709\uff09**\r\n\r\n\r\nBefore submitting an issue \u63d0\u4ea4issue\u4e4b\u524d\uff1a\r\n- Please try to upgrade your code. 
\u5982\u679c\u60a8\u7684\u4ee3\u7801\u4e0d\u662f\u6700\u65b0\u7684\uff0c\u5efa\u8bae\u60a8\u5148\u5c1d\u8bd5\u66f4\u65b0\u4ee3\u7801\r\n- Please check project wiki for common problem solutions.\u9879\u76ee[wiki](https://github.com/binary-husky/chatgpt_academic/wiki)\u6709\u4e00\u4e9b\u5e38\u89c1\u95ee\u9898\u7684\u89e3\u51b3\u65b9\u6cd5\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a2002ebd85f441b3cd563bae28e9966006068ad6', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "0485d01d67d6a41bb0810d6112f40602af1167a9", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/476", "iss_label": "cause of issue is clear", "title": "\u4e0a\u4f20\u6587\u4ef6\u65f6\u91cd\u590d\u4e0a\u4f20", "body": "\r\n\u4e0a\u4f20\u6587\u4ef6\u65f6\u91cd\u590d\u4e0a\u4f20\r\n\u6837\u4f8b\u6587\u4ef6[1.docx](https://github.com/binary-husky/chatgpt_academic/files/11230280/1.docx)\r\n\u754c\u9762![TE$JF@(Q$565$CWJ4)9(A(P](https://user-images.githubusercontent.com/51219393/231979388-e73140de-f563-40c6-9e97-7f0148505cec.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0485d01d67d6a41bb0810d6112f40602af1167a9', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 1)': {'mod': [1]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e594e1b928aadb36d291184bca1deee8601621a8", "is_iss": 1, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1489", "iss_label": "", "title": "[Bug]: \u867d\u7136PDF\u751f\u6210\u5931\u8d25\u4e86, \u4f46\u8bf7\u67e5\u6536\u7ed3\u679c\uff08\u538b\u7f29\u5305\uff09, \u5185\u542b\u5df2\u7ecf\u7ffb\u8bd1\u7684Tex\u6587\u6863, \u60a8\u53ef\u4ee5\u5230Github Issue\u533a, \u7528\u8be5\u538b\u7f29\u5305\u8fdb\u884c\u53cd\u9988\u3002\u5982\u7cfb\u7edf\u662fLinux\uff0c\u8bf7\u68c0\u67e5\u7cfb\u7edf\u5b57\u4f53\uff08\u89c1Github wiki\uff09 ...", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nAnaconda (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u7531\u4e8e\u6700\u4e3a\u5173\u952e\u7684\u8f6c\u5316PDF\u7f16\u8bd1\u5931\u8d25, \u5c06\u6839\u636e\u62a5\u9519\u4fe1\u606f\u4fee\u6b63tex\u6e90\u6587\u4ef6\u5e76\u91cd\u8bd5, \u5f53\u524d\u62a5\u9519\u7684latex\u4ee3\u7801\u5904\u4e8e\u7b2c[-1]\u884c ...\r\n\r\n\u867d\u7136PDF\u751f\u6210\u5931\u8d25\u4e86, \u4f46\u8bf7\u67e5\u6536\u7ed3\u679c\uff08\u538b\u7f29\u5305\uff09, \u5185\u542b\u5df2\u7ecf\u7ffb\u8bd1\u7684Tex\u6587\u6863, \u60a8\u53ef\u4ee5\u5230Github Issue\u533a, \u7528\u8be5\u538b\u7f29\u5305\u8fdb\u884c\u53cd\u9988\u3002\u5982\u7cfb\u7edf\u662fLinux\uff0c\u8bf7\u68c0\u67e5\u7cfb\u7edf\u5b57\u4f53\uff08\u89c1Github 
wiki\uff09 ...\r\n\r\n\u62a5\u544a\u5df2\u7ecf\u6dfb\u52a0\u5230\u53f3\u4fa7\u201c\u6587\u4ef6\u4e0a\u4f20\u533a\u201d\uff08\u53ef\u80fd\u5904\u4e8e\u6298\u53e0\u72b6\u6001\uff09\uff0c\u8bf7\u67e5\u6536\u3002\r\n[gpt_log\\default_user\\shared\\2024-01-18-14-25-51-result.zip](http://localhost:50649/file=C:/Users/admin/gpt_academic/gpt_log/default_user/shared/2024-01-18-14-25-51-result.zip)\r\n[gpt_log\\default_user\\shared\\2024-01-18-14-25-41.trans.html](http://localhost:50649/file=C:/Users/admin/gpt_academic/gpt_log/default_user/shared/2024-01-18-14-25-41.trans.html)\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![image](https://github.com/binary-husky/gpt_academic/assets/102421741/01fc2c02-ea15-4717-af77-e89797e407d1)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n[2024-01-18-14-25-51-result.zip](https://github.com/binary-husky/gpt_academic/files/13973247/2024-01-18-14-25-51-result.zip)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": ".tex"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": [".tex"]}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "9540cf9448026a1c8135c750866b63d320909718", "is_iss": 1, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/257", "iss_label": "", "title": "Something went wrong Connection errored out.", "body": "### Describe the bug\r\n\r\n\u542f\u52a8\u7a0b\u5e8f\u540e\uff0c\u80fd\u6253\u5f00\u9875\u9762\u6b63\u5e38\u663e\u793a\uff0c\u4f46\u662f\u4e0a\u4f20\u6587\u6863\u6216\u8005\u53d1\u9001\u63d0\u95ee\u6cd5\u4f1a\u51fa\u9519\u201cSomething went wrong Connection errored out.\u201d\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [ ] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n\u6309\u7167\u6b63\u5e38\u6b65\u9aa4\uff1a\r\ngit clone https://github.com/binary-husky/chatgpt_academic.git\r\ncd chatgpt_academic\r\npython -m pip install -r requirements.txt \r\npython main.py\r\n\r\nconfig.py\u7684\u914d\u7f6e\u662f\uff1a\r\nUSE_PROXY = True\r\n\r\n### Screenshot\r\n\r\n\"image\"\r\n\"image\"\r\n\u7ed9\u51fa\u4e86\u6b63\u786e\u7684API key\uff0c\u5374\u53d1\u73b0\u4ece\u6ca1\u4f7f\u7528\u8fc7\r\n\"image\"\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\n\u63a7\u5236\u53f0\u62a5\u9519[Error] WebSocket connection to 'ws://localhost:62694/queue/join' failed: There was a bad response from the server. 
(x4)\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\ngradio:3.24.1\r\nProductName:macOS\r\nProductVersion:13.3\r\nBuildVersion:22E252\r\n```\r\n\r\n\r\n### Severity\r\n\r\nannoying", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "gradio-app", "pro": "gradio", "path": ["gradio/routes.py"]}], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gradio/routes.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "bfa6661367b7592e82225515e5e4845c4aad95bb", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/252", "iss_label": "", "title": "\u80fd\u4e0d\u80fd\u4f7f\u7528azure openai key?", "body": "\u4ee3\u7406\u670d\u52a1\u5668\u4e0d\u591f\u7a33\u5b9a\uff0c\u66f4\u9ebb\u70e6\u7684\u662f\u7ed9openai\u7eed\u8d39\uff0c\u8fd8\u8981\u4e2a\u7f8e\u56fd\u4fe1\u7528\u5361\r\n\r\n\u975e\u5e38\u597d\u7684\u5e94\u7528\uff0c\u5e0c\u671b\u51fa\u66f4\u591a\u7684\u63d2\u4ef6\u529f\u80fd\uff0c\u8c22\u8c22", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'bfa6661367b7592e82225515e5e4845c4aad95bb', 'files': [{'path': 'config.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["config.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "2d2e02040d7d91d2f2a4c34f4d0bf677873b5f4d", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1328", "iss_label": "", "title": "[Bug]: \u7cbe\u51c6\u7ffb\u8bd1PDF\u6587\u6863(NOUGAT)\u529f\u80fd\u51fa\u9519\uff0c", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOthers (Please Describe)\n\n### Version | \u7248\u672c\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6d4b\u8bd5\u670d\u52a1\u5668\uff0c\u7cbe\u51c6\u7ffb\u8bd1PDF\u6587\u6863(NOUGAT)\u529f\u80fd\u51fa\u9519\uff0c\u4f46\u662f\u53ef\u4ee5\u4f7f\u7528\u7cbe\u51c6\u7ffb\u8bd1PDF\u7684\u529f\u80fd\r\n\r\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/b9a0db8c-282a-4e02-a527-97fcf63eaaa0)\r\n\r\n\u62a5\u9519\u4fe1\u606f\u5982\u4e0b\r\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/b40e4d8b-ade9-4e27-86e5-75f6027fbbb0)\r\n\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/ac4995d5-0a68-433b-aa44-2b3c82bbc1e3)\r\nTraceback (most recent call last):\r\n File \"./toolbox.py\", line 159, in decorated\r\n yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n File \"./crazy_functions/\u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863_NOUGAT.py\", line 93, in \u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863\r\n yield from \u89e3\u6790PDF_\u57fa\u4e8eNOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)\r\n File \"./crazy_functions/\u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863_NOUGAT.py\", line 111, in \u89e3\u6790PDF_\u57fa\u4e8eNOUGAT\r\n fpp = yield from nougat_handle.NOUGAT_parse_pdf(fp, chatbot, 
history)\r\n File \"./crazy_functions/crazy_utils.py\", line 761, in NOUGAT_parse_pdf\r\n raise RuntimeError(\"Nougat\u89e3\u6790\u8bba\u6587\u5931\u8d25\u3002\")\r\nRuntimeError: Nougat\u89e3\u6790\u8bba\u6587\u5931\u8d25\u3002\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2d2e02040d7d91d2f2a4c34f4d0bf677873b5f4d', 'files': [{'path': 'crazy_functions/crazy_utils.py', 'Loc': {\"('nougat_interface', 'NOUGAT_parse_pdf', 739)\": {'mod': [752]}, \"('nougat_interface', None, 719)\": {'mod': [723]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["crazy_functions/crazy_utils.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "17abd29d5035b5b227deaad69d32cf437b23e542", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/94", "iss_label": "", "title": "[\u4e00\u4e9b\u5efa\u8bae]input\u6846\u8fd8\u662f\u592a\u5c0f\u4e86", "body": "RT \u591a\u884c\u8f93\u5165\u8fd8\u662f\u4e0d\u65b9\u4fbf\uff0c\u5982\u679c\u9002\u5f53\u8c03\u6574\u4f1a\u66f4\u597d\u7528\u3002\r\n\r\n\u5e0c\u671b\u91c7\u7eb3\uff0c\u611f\u8c22\u5206\u4eab\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '17abd29d5035b5b227deaad69d32cf437b23e542', 'files': [{'path': 'main.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["main.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "37744a9cb173477398a2609f02d5e7cef47eb677", "is_iss": 1, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1438", "iss_label": "", "title": "[Bug]: \u6d6e\u52a8\u8f93\u5165\u6846\u5728\u62d6\u81f3\u9876\u90e8\u540e\uff0c\u65e0\u6cd5\u91cd\u65b0\u79fb\u4f4d", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOthers (Please Describe)\n\n### Version | \u7248\u672c\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6d6e\u52a8\u8f93\u5165\u6846\u5728\u62d6\u81f3\u9876\u90e8\u540e\uff0c\u65e0\u6cd5\u91cd\u65b0\u79fb\u4f4d\r\n\r\n\u671f\u671b\uff1a\u91cd\u65b0\u52fe\u9009\u540e\uff0c\u5e94\u8be5\u56de\u5230\u521d\u59cb\u4f4d\u7f6e\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![2024-01-02 14 36 52](https://github.com/binary-husky/gpt_academic/assets/46100050/86a648dc-ab38-486f-9a0b-7f71dde0bd57)\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gradio-fix/commit/fb67dd12f58aa53c75a90378cddbc811ac3c01d2", "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], 
"other_rep_loc": [{"org": "binary-husky", "pro": "gradio-fix", "path": ["{'base_commit': 'fb67dd12f58aa53c75a90378cddbc811ac3c01d2', 'files': [{'path': 'js/app/src/components/Floating/StaticFloating.svelte', 'status': 'modified', 'Loc': {'(None, None, 48)': {'add': [48]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gradio-fix", "js/app/src/components/Floating/StaticFloating.svelte"]}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6538c58b8e5a4a7ae08dfa1ae9970bc422158096", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/620", "iss_label": "", "title": "\u60f3\u95ee\u95eenewbing\u7684cookies\u600e\u4e48\u586b\u5199\uff0c\u6211\u4ecejavascript:alert(document.cookie)\u627e\u5230\u4e86cookies\u4f46\u662f\u4e00\u76f4\u663e\u793acookies\u6709\u9519", "body": "![image](https://user-images.githubusercontent.com/73226302/234341095-273ea6e0-aadc-4e19-8966-05709d61f9b1.png)\r\n![image](https://user-images.githubusercontent.com/73226302/234341151-017d0634-620a-4377-b972-ddb2d7a22d2a.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6538c58b8e5a4a7ae08dfa1ae9970bc422158096', 'files': [{'path': 'config.py', 'Loc': {'(None, None, None)': {'mod': [69]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "", "info_type": "Other"}, "loctype": {"code": ["config.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6d8c8cd3f0b9d2b6fe8d412b83f902cbd43fa0bd", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/150", "iss_label": "documentation\nhigh value issue", "title": "\u6709\u6ca1\u6709\u5b8c\u5168\u90e8\u7f72\u6210\u529f\u7684\u5927\u795e\u51fa\u4e2a\u8be6\u7ec6\u7684\u90e8\u7f72\u6b65\u9aa4\u5440\uff1fWindows \u6709\u622a\u56fe\uff0c\u8dea\u6c42", "body": "Windows\u5b89\u88c5\u90e8\u7f72\r\n\u57fa\u672c\u73af\uff1a\u5b89\u88c5anaconda\r\n1.\u4e0b\u8f7d\u9879\u76ee CMD\r\n\u9009\u62e9\u8def\u5f84\r\ngit clone https://github.com/binary-husky/chatgpt_academic.git\r\ncd chatgpt_academic\r\n\u6211\u4eec\u5efa\u8bae\u5c06config.py\u590d\u5236\u4e3aconfig_private.py\u5e76\u5c06\u540e\u8005\u7528\u4f5c\u4e2a\u6027\u5316\u914d\u7f6e\u6587\u4ef6\u4ee5\u907f\u514dconfig.py\u4e2d\u7684\u53d8\u66f4\u5f71\u54cd\u4f60\u7684\u4f7f\u7528\u6216\u4e0d\u5c0f\u5fc3\u5c06\u5305\u542b\u4f60\u7684OpenAI API KEY\u7684config.py\u63d0\u4ea4\u81f3\u672c\u9879\u76ee\u3002\r\ncp config.py config_private.py\r\n2.\u521b\u5efa\u865a\u62df\u73af\u5883 python 3.11\r\nconda create -n chatgpt python=3.11.0 #\u65b0\u5efa\u73af\u5883\u3001\r\n3.\u8fdb\u5165\u9879\u76ee\u4e0b\u8f7d\u8def\u5f84\r\n\u4f8b\u5982 cd G:\\python\\Program\\chatgpt_academic\r\n4.\u542f\u52a8\u865a\u62df\u73af\u5883\r\nconda activate chatgpt\r\n5. 
\u5b89\u88c5 gradio>=3.23\r\n\uff081\uff09\u5230https://pypi.org/project/gradio/ \u4e0b\u8f7dwhl\u7248\u672c\r\n\uff082\uff09pip install G:\\python\\Program\\chatgpt_academic\\gradio-3.23.0-py3-none-any.whl\r\n6.\u914d\u7f6e\u5176\u4ed6\u73af\u5883\r\n\uff081\uff09\u6253\u5f00requirements.txt\uff0c\u6ce8\u91ca\u6389gradio\uff0c\u7136\u540e\u4fdd\u5b58\r\n\uff082\uff09\u8fd0\u884c python -m pip install -r requirements.txt\r\n7.\u542f\u52a8\u4ee3\u7406\r\n8. \u914d\u7f6econfig_private.py\r\n\uff081\uff09\u6dfb\u52a0API_KEY\r\n\uff082\uff09\u4fee\u6539USE_PROXY = Ture\r\n\uff083\uff09\u4fee\u6539proxies\r\n\u5728\u6d4f\u89c8\u5668\u8f93\u5165: https://ipapi.co/json/\r\n\u6d4f\u89c8\u5668\u4e0a\u53f3\u952e->\u68c0\u67e5->\u7f51\u7edc->ctrl+r\r\n\u6253\u5f00json\uff0c\u5c06\u8fdc\u7a0b\u5730\u5740\u4fee\u6539\u5230proxies = { \"http\": \"104.26.9.44:443\", \"https\": \"104.26.9.44:443\", }\r\n9.\u542f\u52a8\u7a0b\u5e8f\r\npython main.py", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6d8c8cd3f0b9d2b6fe8d412b83f902cbd43fa0bd', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+ \nDoc"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e20070939c6c7eeca33a8438041c9e038836957b", "is_iss": 1, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/568", "iss_label": "enhancement", "title": "\u80fd\u5426\u589e\u52a0\u804a\u5929\u5185\u5bb9\u5bfc\u51fa\u529f\u80fd\uff1f", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["gpt_log/chat_secrets.log"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_log/chat_secrets.log"]}} -{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6c448b9a601ba4b9cc84e8bc625a3a91b1982ba4", "is_iss": 0, "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/756", "iss_label": "", "title": "[Bug]: ", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt and python>=3.8)\n\n### Describe the bug | \u7b80\u8ff0\n\n\u53ea\u6709\u51fa\u53bb\u7684\u6d88\u606f\uff0c\u6ca1\u6709\u8fd4\u56de\u6d88\u606f\uff0c\u8bd5\u8fc7\u4e86ap2id\u548cnewbing\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![\u5fae\u4fe1\u622a\u56fe_20230517095200](https://github.com/binary-husky/gpt_academic/assets/43396544/32d9bc41-351b-4ceb-a7d2-99e09b21ddb5)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6c448b9a601ba4b9cc84e8bc625a3a91b1982ba4', 'files': [{'path': 'request_llms/requirements_newbing.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "\u4f9d\u8d56"}, "loctype": {"code": [], "doc": [], 
"test": [], "config": ["request_llms/requirements_newbing.txt"], "asset": []}} -{"organization": "deepseek-ai", "repo_name": "DeepSeek-V3", "base_commit": "6a30b43249a5710a3adb18c11763222d3fca8756", "is_iss": 0, "iss_html_url": "https://github.com/deepseek-ai/DeepSeek-V3/issues/566", "iss_label": "", "title": "Please provide the code for your model architecture.", "body": "**Is your feature request related to a problem? Please describe.**\nThis repo only provides weights. It makes it difficult to confirm claims from the article.\n\n**Describe the solution you'd like**\n A repo where the code to the model architecture is provided. \n\n**Describe alternatives you've considered**\nClearly state that the model is not open source. \n\n**Additional context**\nNone\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6a30b43249a5710a3adb18c11763222d3fca8756', 'files': [{'path': 'inference/model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["inference/model.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepseek-ai", "repo_name": "DeepSeek-V3", "base_commit": "0d16ea24c8030a30d4fe8a75b28e05b03b4e0970", "is_iss": 1, "iss_html_url": "https://github.com/deepseek-ai/DeepSeek-V3/issues/210", "iss_label": "", "title": "[BUG]convert\u540e\u8fd0\u884c\u9519\u8bef", "body": "**Describe the bug**\r\n[rank0]: ValueError: Unrecognized model in ../DV3-hf-32/. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glm, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, idefics3, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phimoe, pix2struct, pixtral, plbart, 
poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zoedepth\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior.\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["tokenizer.json", "tokenizer_config.json"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["tokenizer_config.json", "tokenizer.json"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "c529bd4f1cb3a8abc53574b7211fc0b887107073", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/98", "iss_label": "wontfix", "title": "IndexError: list index out of range on training", "body": "```\r\n# python3.6 faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/stalone -m ~/faceswap/models/\r\nModel A Directory: /root/faceswap/data/trump\r\nModel B Directory: /root/faceswap/data/stalone\r\nTraining data directory: /root/faceswap/models\r\nLoading data, this may take a while...\r\nLoading Model from Model_Original plugin...\r\n/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nUsing TensorFlow backend.\r\n/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\r\n return f(*args, **kwds)\r\nFailed loading existing training data.\r\nUnable to open file (unable to open file: name = '/root/faceswap/models/encoder.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)\r\nLoading Trainer from Model_Original plugin...\r\nStarting. 
Press \"Enter\" to stop training and save model\r\nException in thread Thread-2:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/root/faceswap/lib/utils.py\", line 42, in run\r\n for item in self.generator:\r\n File \"/root/faceswap/lib/training_data.py\", line 43, in minibatch\r\n rtn = numpy.float32([read_image(data[j]) for j in range(i,i+size)])\r\n File \"/root/faceswap/lib/training_data.py\", line 43, in \r\n rtn = numpy.float32([read_image(data[j]) for j in range(i,i+size)])\r\nIndexError: list index out of range\r\n```\r\n\r\n## Expected behavior\r\nThere shouldn't be \"IndexError: list index out of range\"\r\n\r\n## Actual behavior\r\n\r\n*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*\r\n\r\n## Steps to reproduce\r\n\r\n## Other relevant information\r\nH/W: 4 cores, 16GB, Nvidial P100\r\nS/W: Ubuntu 16.04, NVIDIA binary driver - version 384.111\r\nCUDA 8.0\r\nCuDNN 6\r\nPython 3.6\r\nfaceswap commit: 0f8d9db826d7588f9feb151ab234f2aaf0d8ecf2\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c529bd4f1cb3a8abc53574b7211fc0b887107073', 'files': [{'path': 'lib/training_data.py', 'Loc': {\"(None, 'minibatch', 33)\": {'mod': [38]}}, 'status': 'modified'}, {'path': 'lib/cli/args_train.py', 'Loc': {\"('TrainArgs', 'get_argument_list', 35)\": {'mod': [140]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/cli/args_train.py", "lib/training_data.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "183aee37e93708c0ae73845face5b4469319ebd3", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1208", "iss_label": "", "title": "[Question] Which part of code to implement 'Configure Settings' GUI?", "body": "Which part of code to implement 'Configure Settings' GUI?\r\n\r\n![a](https://user-images.githubusercontent.com/32773605/152643917-b26f4b16-71e0-4f9a-8209-93206355f1b6.jpg)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '183aee37e93708c0ae73845face5b4469319ebd3', 'files': [{'path': 'lib/gui/popup_configure.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/gui/popup_configure.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "3ba44f75518e8010befab88042247e5147d0f212", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/15", "iss_label": "question\ndata", "title": "do i have to rename the given training data to src? ", "body": "if not, where to put the unzip data into directory. sorry for asking newby questions. \r\ni am using pycharm and docker. 
thanks\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3ba44f75518e8010befab88042247e5147d0f212', 'files': [{'path': 'convert_trump_cage.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["convert_trump_cage.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "68ef3b992674d87d0c73da9c29a4c5a0e735f04b", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/101", "iss_label": "", "title": "help me", "body": "virtualenv '/home/test/Desktop/faceswap-master'\r\npython3.5 '/home/test/Desktop/faceswap-master/faceswap.py' -h\r\n\r\ntest@ubuntu:~$ virtualenv '/home/test/Desktop/faceswap-master'\r\nNew python executable in /home/test/Desktop/faceswap-master/bin/python\r\nInstalling setuptools, pip, wheel...done.\r\ntest@ubuntu:~$ python3.5 '/home/test/Desktop/faceswap-master/faceswap.py' -h\r\nTraceback (most recent call last):\r\n File \"/home/test/Desktop/faceswap-master/faceswap.py\", line 8, in \r\n from lib.utils import FullHelpArgumentParser\r\n File \"/home/test/Desktop/faceswap-master/lib/utils.py\", line 5, in \r\n from scandir import scandir\r\nImportError: No module named 'scandir'\r\ntest@ubuntu:~$ \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '68ef3b992674d87d0c73da9c29a4c5a0e735f04b', 'files': [{'path': 'requirements-gpu.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements-gpu.txt"], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "629c02a61e1ad5f769f8f7388a091d5ce9aa8160", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1254", "iss_label": "", "title": "Can't Open GUI on Windows", "body": "**Describe the bug**\r\nWhenever I try to open the GUI of Faceswap, I get an error and it doesn't open. I am on Windows, and I have uninstalled and reinstalled multiple times, including redoing the conda environment. CLI functions work, but the main GUI does not open, either from the shortcut or a manual terminal run. I have also tried running with and without admin\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Uninstall old Faceswap versions\r\n2. Install the latest windows version\r\n3. Run the Faceswap program in GUI mode\r\n4. See error\r\n\r\n**Expected behavior**\r\nI want the Faceswap GUI to open. 
It doesn't.\r\n\r\n**Screenshots**\r\n![image](https://user-images.githubusercontent.com/63259343/183342692-6f1c1bec-df9b-4f71-8a23-f2f77fccc008.png)\r\n![image](https://user-images.githubusercontent.com/63259343/183342835-998834e0-66d6-4751-84e9-a1c150e22063.png)\r\n\r\n\r\n**Desktop:**\r\n - OS: [Windows 11]\r\n - Python Version [3.9.12]\r\n - Conda Version [4.13.0]\r\n - Commit ID [6b2aac6]\r\n\r\n\r\n**Crash Report**\r\n[crash_report.2022.08.07.224753577271.log](https://github.com/deepfakes/faceswap/files/9278810/crash_report.2022.08.07.224753577271.log)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '629c02a61e1ad5f769f8f7388a091d5ce9aa8160', 'files': [{'path': 'requirements/_requirements_base.txt', 'Loc': {'(None, None, 15)': {'mod': [15]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements/_requirements_base.txt"], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9696b5606fd0963814fc0c3644565aa60face69d", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/462", "iss_label": "", "title": "Modify extractor to focus on mouth", "body": "I'd like to modify the extractor script to focus on the lower half of the face - specifically the mouth area. \r\n\r\nI'm experimenting with changing people's mouth movements, and I want to train a higher resolution \"mouth only\" network, so I can create new speech patterns that are re-composited onto the original footage. \r\n\r\nIs there a way to modify which facial landmarks the extractor looks at so it just takes the mouth?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9696b5606fd0963814fc0c3644565aa60face69d', 'files': [{'path': 'lib/aligner.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/aligner.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9fb70f13552927bea1bf65fe35f4866f99171eaf", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/656", "iss_label": "", "title": "Not showing graph in gui", "body": "in log gui:\r\n`Exception in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/tkinter/__init__.py\", line 1705, in __call__\r\n return self.func(*args)\r\n File \"/home/telecast/Documents/faceswap/lib/gui/command.py\", line 461, in \r\n command=lambda cmd=action: cmd(self.command))\r\n File \"/home/telecast/Documents/faceswap/lib/gui/utils.py\", line 550, in load\r\n self.add_to_recent(cfgfile.name, command)\r\n File \"/home/telecast/Documents/faceswap/lib/gui/utils.py\", line 596, in add_to_recent\r\n recent_files = self.serializer.unmarshal(inp.read().decode(\"utf-8\"))\r\n File \"/home/telecast/Documents/faceswap/lib/Serializer.py\", line 61, in unmarshal\r\n return json.loads(input_string)\r\n File \"/usr/lib/python3.6/json/__init__.py\", line 354, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/lib/python3.6/json/decoder.py\", line 339, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/lib/python3.6/json/decoder.py\", line 357, 
in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n`\r\n\r\n faceswap.log:\r\n`03/10/2019 00:02:36 MainProcess training_0 train training INFO Loading data, this may take a while...\r\n03/10/2019 00:02:36 MainProcess training_0 plugin_loader _import INFO Loading Model from Villain plugin...\r\n03/10/2019 00:02:40 MainProcess training_0 config load_config INFO Loading config: '/home/telecast/Documents/faceswap/config/train.ini'\r\n03/10/2019 00:02:40 MainProcess training_0 _base replace_config INFO Using configuration saved in state file\r\n03/10/2019 00:02:40 MainProcess training_0 deprecation new_func WARNING From /home/telecast/Documents/faceswap_env/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\\nInstructions for updating:\\nColocations handled automatically by placer.\r\n03/10/2019 00:02:49 MainProcess training_0 _base load WARNING Failed loading existing training data. Generating new models\r\n03/10/2019 00:02:52 MainProcess training_0 plugin_loader _import INFO Loading Trainer from Original plugin...\r\n03/10/2019 00:02:54 MainProcess training_0 _base set_tensorboard INFO Enabled TensorBoard Logging\r\n03/10/2019 00:02:54 MainProcess training_0 deprecation new_func WARNING From /home/telecast/Documents/faceswap_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\\nInstructions for updating:\\nUse tf.cast instead.\r\n03/10/2019 00:03:35 MainProcess training_0 _base save_models INFO saved models\r\n03/10/2019 00:04:29 MainProcess MainThread train end_thread INFO Exit requested! The trainer will complete its current cycle, save the models and quit (it can take up a couple of seconds depending on your training speed). 
If you want to kill it now, press Ctrl + c\r\n03/10/2019 00:04:31 MainProcess training_0 _base save_models INFO saved models`\r\n\r\n$ cat /etc/*release\r\nDISTRIB_ID=Ubuntu\r\nDISTRIB_RELEASE=18.10\r\n\r\n$ pip3 list\r\nPackage Version \r\n----------------------- --------\r\nabsl-py 0.7.0 \r\nastor 0.7.1 \r\nClick 7.0 \r\ncloudpickle 0.8.0 \r\ncmake 3.13.3 \r\ncycler 0.10.0 \r\ndask 1.1.3 \r\ndecorator 4.3.2 \r\ndlib 19.16.0 \r\nface-recognition 1.2.3 \r\nface-recognition-models 0.3.0 \r\nffmpy 0.2.2 \r\ngast 0.2.2 \r\ngrpcio 1.19.0 \r\nh5py 2.9.0 \r\nKeras 2.2.4 \r\nKeras-Applications 1.0.7 \r\nKeras-Preprocessing 1.0.9 \r\nkiwisolver 1.0.1 \r\nMarkdown 3.0.1 \r\nmatplotlib 2.2.2 \r\nmock 2.0.0 \r\nnetworkx 2.2 \r\nnumpy 1.15.4 \r\nnvidia-ml-py3 7.352.0 \r\nopencv-python 4.0.0.21\r\npathlib 1.0.1 \r\npbr 5.1.3 \r\nPillow 5.4.1 \r\npip 19.0.3 \r\nprotobuf 3.7.0 \r\npsutil 5.6.0 \r\npyparsing 2.3.1 \r\npython-dateutil 2.8.0 \r\npytz 2018.9 \r\nPyWavelets 1.0.2 \r\nPyYAML 3.13 \r\nscikit-image 0.14.2 \r\nscikit-learn 0.20.3 \r\nscipy 1.2.1 \r\nsetuptools 40.8.0 \r\nsix 1.12.0 \r\ntensorboard 1.13.1 \r\ntensorflow-estimator 1.13.0 \r\ntensorflow-gpu 1.13.1 \r\ntermcolor 1.1.0 \r\ntoolz 0.9.0 \r\ntqdm 4.31.1 \r\nWerkzeug 0.14.1 \r\nwheel 0.33.1\r\n\r\n$ nvcc --version\r\nnvcc: NVIDIA (R) Cuda compiler driver\r\nCopyright (c) 2005-2018 NVIDIA Corporation\r\nBuilt on Sat_Aug_25_21:08:01_CDT_2018\r\nCuda compilation tools, release 10.0, V10.0.130\r\n\r\n cudnn v7.4.1.5\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9fb70f13552927bea1bf65fe35f4866f99171eaf', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "e518206c8ef935ebc1b1ff64ae2901cc8ef05f94", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/57", "iss_label": "", "title": "Cannot install tensorflow-gpu requirement", "body": "\r\nTried installing the requirements-gpu.txt and get this error:\r\n\r\nCollecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) Cache entry deserialization failed, entry ignored Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: ) No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n\r\nI went here to troubleshoot the issue: https://github.com/tensorflow/tensorflow/issues/8251\r\nInstalled Python 64bit. 
Opened new command prompt window and typed in: pip3 install --upgrade tensorflow-gpu\r\n\r\nSuccessfully uninstalled setuptools-28.8.0\r\nSuccessfully installed bleach-1.5.0 enum34-1.1.6 html5lib-0.9999999 markdown-2.6.11 numpy-1.13.3 protobuf-3.5.1 setuptools-38.4.0 six-1.11.0 tensorflow-gpu-1.4.0 tensorflow-tensorboard-0.4.0rc3 werkzeug-0.14.1 wheel-0.30.0\r\n\r\nWent back to my faceswap env to enter the requirements-gpu.txt and still get the same error:\r\n(faceswap) C:\\faceswap>pip install -r requirements-gpu.txt\r\nCollecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )\r\nNo matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n\r\n## Other relevant information\r\n\r\n- **Operating system and version:** Windows 10\r\nPython 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32\r\n- **Faceswap version:** 1/5/2018\r\n- **Faceswap method:** CPU/GPU \"CPU method only works\"\r\n- ...\r\n\r\n ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e518206c8ef935ebc1b1ff64ae2901cc8ef05f94', 'files': [{'path': 'requirements-gpu.txt', 'Loc': {'(None, None, 6)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\nDependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements-gpu.txt"], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "51f1993d93e0ffb581d44416f327f0cf731c34e8", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/209", "iss_label": "", "title": "doesn't work on 2GB GTX 960 even with LowMem model (what params could be reduced?)", "body": "LowMem is different from the common model with 2 lines:\r\nENCODER_DIM = 512 # instead of 1024\r\n#x = self.conv(1024)(x) - commented out.\r\n\r\nBut it's still not enough to run under Ubuntu 16.04, cuda8, 1.7Gb of free video RAM.\r\nIt fails with OOM on any batch size, even with bs=1 and bs=2.\r\n\r\nWhat about having some configurable params here? Like reducing filters numbers or ENCODER_DIM or smth else? \r\nAlso that would be great to have some doc which describes few main params and their influence on quality etc. For example fakeapp allows to select number of layers, nodes etc.\r\n\r\nP.S. 
I managed to run it with ENCODER_DIM = 64 and bs=16, but results are not so good (after 15 hours).\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '51f1993d93e0ffb581d44416f327f0cf731c34e8', 'files': [{'path': 'faceswap.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["faceswap.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd", "is_iss": 0, "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1361", "iss_label": "", "title": "Bounding boxes coordinates", "body": "It has been 2 weeks I have been working on it but cannot find the solution.\r\n\r\nI want the bounding boxes on the original image, of the result that is produced by the \"Extract\" process of faceswap code.\r\n\r\n\"Extract\" writes the faces extracted from the input image(s). I just want the coordinates from which this face is extracted (from original image).\r\n\r\nIf you could help me. I would be very grateful and would also help other people searching for the same problem.\r\nThank you.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd', 'files': [{'path': 'lib/align/detected_face.py', 'Loc': {\"('DetectedFace', '__init__', 82)\": {'mod': [84, 85, 86, 87]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/align/detected_face.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/659", "iss_label": "", "title": "Problem with FadeOutAndShift", "body": "t3 text is not going through FadeOutAndShift.\r\nAlso tell me how I can FadeOutAndShift t1 and t3 together\r\n\r\n```# python -m manim try3.py test1 -pm\r\n\r\nfrom manimlib.imports import *\r\n\r\nclass test1(Scene):\r\n\tdef construct(self):\r\n\t\tt1=TextMobject(\"Hi!\")\r\n\t\tt2=TextMobject(\"My name is\")\r\n\t\tt3=TextMobject(\"Girish\")\r\n\r\n\t\tt1.set_color(RED)\r\n\t\tt3.set_color(BLUE)\r\n\r\n\t\tself.play(Write(t1), run_time=2)\r\n\t\tself.play(ApplyMethod(t1.shift, 1*UP))\r\n\t\tself.play(FadeIn(t2))\r\n\t\tself.play(Transform(t2, t3), run_time=2)\r\n\t\tself.wait(2)\r\n\t\tself.play(FadeOutAndShift(t1))\r\n self.play(FadeOutAndShift(t3))\r\n\t\t\r\n\r\n\t\t\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {\"('Scene', 'play', 455)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/scene/scene.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "ce06e58505dff26cccd497a9bd43969f74ae0da9", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/274", "iss_label": "", "title": 
"ImportError: No module named animation", "body": "I've installed manim on Win10. After run \"python extract_scene.py -s example_scenes.py\",\r\n\r\nthe next error is shown in the python interactive interpretor:\r\n\r\n> Traceback (most recent call last):\r\n File \"extract_scene.py\", line 15, in \r\n from scene.scene import Scene\r\n File \"G:\\python\\manim\\scene\\scene.py\", line 16, in \r\n from animation.transform import MoveToTarget\r\n File \"G:\\python\\manim\\animation\\transform.py\", line 8, in \r\n from animation.animation import Animation\r\nImportError: No module named animation\r\n\r\nWhat I can do? I'm looking forward to get help to solve this problem. ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ce06e58505dff26cccd497a9bd43969f74ae0da9', 'files': [{'path': 'animation/transform.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["animation/transform.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "55ece141e898577ce44e71d718212a1ee816ed74", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/658", "iss_label": "", "title": "How to add sound to video?", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '55ece141e898577ce44e71d718212a1ee816ed74', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {\"('Scene', 'add_sound', 543)\": {'mod': []}}, 'status': 'modified'}, {'path': 'old_projects/clacks/solution2/simple_scenes.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["old_projects/clacks/solution2/simple_scenes.py", "manimlib/scene/scene.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "97a0a707d759e0235450ea8c20f55a2529bd2973", "is_iss": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/878", "iss_label": "", "title": "Swedish characters not working", "body": "\r\n\r\nInclude at least:\r\n1. Steps to reproduce the issue (e.g. the command you ran)\r\n2. The unexpected behavior that occurred (e.g. error messages or screenshots)\r\n3. The environment (e.g. 
operating system and version of manim)\r\n\r\nI am new to manim and want to include swedish characters in a text, but it gives an error message when rendering.\r\nCode:\r\nclass Swe(Scene):\r\n\tdef construct(self):\r\n\t\ttext = TextMobject(r\"$\\\"o$\")\r\n\t\tself.add(text)\r\n\t\tself.wait()\r\n\r\nError message:\r\nTraceback (most recent call last):\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\extract_scene.py\", line 153, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\scene\\scene.py\", line 54, in __init__\r\n self.construct()\r\n File \"Geony.py\", line 115, in construct\r\n text = TextMobject(r\"$\\\"o$\")\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\mobject\\svg\\tex_mobject.py\", line 144, in __init__\r\n self, self.arg_separator.join(tex_strings), **kwargs\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\mobject\\svg\\tex_mobject.py\", line 45, in __init__\r\n self.template_tex_file_body\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\utils\\tex_file_writing.py\", line 19, in tex_to_svg_file\r\n dvi_file = tex_to_dvi(tex_file)\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\utils\\tex_file_writing.py\", line 67, in tex_to_dvi\r\n \"See log output above or the log file: %s\" % log_file)\r\nException: Latex error converting to dvi. See log output above or the log file: C:\\Manim\\manim\\manim2020\\manimlib\\files\\Tex\\a26fbd67dc90adbc.log\r\n\r\nI am running python 3.7 (64 bit) and MikTex 2.9. All other features of manim are working fine.\r\nAny help would be much appreciated. Also, please keep in mind that I am new to manim and programing in general.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [12], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "6880ebcbc2525b2f3c0731439bef7ff981b4b5b4", "is_iss": 1, "iss_html_url": "https://github.com/3b1b/manim/issues/924", "iss_label": "", "title": "Reconsidering TEX_USE_CTEX / using XeLaTeX", "body": "I worked on manim back in 2018. I added the function for using CTeX (XeLaTeX package for Chinese) and XeLaTeX instead of LaTeX using the flag `TEX_USE_CTEX` in constants.py (#315).\r\n\r\nI have stopped working on manim since 2019, but over the months there are apparently more and more people who want to use LaTeX rendering in non-English languages, and even on very old issues I still occasionally see people asking how to do that... Looking back at my change I really should have **decoupled using CTeX (TeX template) from XeLaTeX (rendering tool)**. This has caused a *lot* of confusions and made weird hacks/fixes necessary for only using XeLaTeX, especially for a language that is not Chinese or English, with the most recent #858 and #840. It really should have been a flag `TEX_USE_XELATEX` and another flag `TEMPLATE_TEX_NAME`, and the flag `TEX_USE_CTEX` is such that when it is `True`, `TEX_USE_XELATEX` is `True` and `TEMPLATE_TEX_NAME` is `\"ctex_template.tex\"`; otherwise `TEX_USE_XELATEX` is `False` and `TEMPLATE_TEX_NAME` is `\"tex_template.tex\"`. Then set `TEMPLATE_TEX_FILE` to `os.path.join(os.path.dirname(os.path.realpath(__file__)), TEMPLATE_TEX_NAME)`. 
Corresponding logic: constants.py lines 74\u201379.\r\n\r\nIt might be even better to set it dynamically using a function or as a parameter of `TexMobject()`, (see issues like #891). I looked at the source code and this is definitely possible. The options I can think of are\r\n1. Use the current `TEX_USE_CTEX`\r\n2. Add flags `TEX_USE_XELATEX` and `TEMPLATE_TEX_NAME`, and rework `TEX_USE_CTEX`\r\n3. Add parameters for `TexMobject()` like `use_xelatex=False` and `tex_template=\"tex_template.tex\"`\r\n4. Use the flags of 2. as a default, and make it possible to change the default using 3.\r\n\r\nNot really sure if this is the right place to raise this issue.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "ManimCommunity"}, {"pro": "manim", "path": ["manim/utils/tex_templates.py"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manim/utils/tex_templates.py"], "doc": [], "test": [], "config": [], "asset": ["ManimCommunity"]}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/660", "iss_label": "", "title": "ColorByCaracter help ", "body": "I want to color only theta of ```{ e }^{ i\\theta }```\r\n\r\nI was going through ColorByCaracter in 3_text_like_arrays.py . \r\nBut I fail to understand how you people separate the tex formula into arrays. I know about arrays but I can only copy the tex code from [Daum Equation Editor](http://s1.daumcdn.net/editor/fp/service_nc/pencil/Pencil_chromestore.html) and paste it. I don't know how to divide them into arrays.\r\n\r\nPlease help me.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/mobject/svg/tex_mobject.py', 'Loc': {\"('TexMobject', None, 132)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/svg/tex_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "32abbb9371308e8dff7410de387fe78e64b6fe7a", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/700", "iss_label": "", "title": "OSError: No file matching Suv.svg in image directory", "body": "I've tried putting the .SVG image into */media/designs/svg_images. 
But when I want to quote it in the .py file it still reports errors:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jason/Documents/manim/manimlib/extract_scene.py\", line 155, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"/home/jason/Documents/manim/manimlib/scene/scene.py\", line 53, in __init__\r\n self.construct()\r\n File \"SVGTEST.py\", line 44, in construct\r\n height=height_size\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 45, in __init__\r\n self.ensure_valid_file()\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 63, in ensure_valid_file\r\n self.file_name)\r\nOSError: No file matching MYSVG.svg in image directory\r\n\r\n```\r\n(Manjaro Linux, Texlive)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '32abbb9371308e8dff7410de387fe78e64b6fe7a', 'files': [{'path': 'manimlib/mobject/svg/svg_mobject.py', 'Loc': {\"('SVGMobject', 'ensure_valid_file', 49)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/svg/svg_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/694", "iss_label": "", "title": "can't graph trigonometric function of secx, cscx, cotx, tanx,...", "body": "source code:\r\n\r\nclass PlotFunctions(GraphScene):\r\n CONFIG = {\r\n \"x_min\" : -10,\r\n \"x_max\" : 10.3,\r\n \"y_min\" : -1.5,\r\n \"y_max\" : 1.5,\r\n \"graph_origin\" : ORIGIN ,\r\n \"function_color\" : RED ,\r\n \"axes_color\" : GREEN,\r\n \"x_labeled_nums\" :range(-10,12,2),\r\n\r\n }\r\n def construct(self):\r\n self.setup_axes(animate=True)\r\n func_graph=self.get_graph(self.func_to_graph,self.function_color)\r\n func_graph2=self.get_graph(self.func_to_graph2)\r\n vert_line = self.get_vertical_line_to_graph(TAU,func_graph,color=YELLOW)\r\n graph_lab = self.get_graph_label(func_graph, label = \"\\\\cos(x)\")\r\n graph_lab2=self.get_graph_label(func_graph2,label = \"\\\\sin(x)\", x_val=-10, direction=UP/2)\r\n two_pi = TexMobject(\"x = 2 \\\\pi\")\r\n label_coord = self.input_to_graph_point(TAU,func_graph)\r\n two_pi.next_to(label_coord,RIGHT+UP)\r\n\r\n\r\n\r\n self.play(ShowCreation(func_graph),ShowCreation(func_graph2))\r\n self.play(ShowCreation(vert_line), ShowCreation(graph_lab), ShowCreation(graph_lab2),ShowCreation(two_pi))\r\n\r\n\r\n def func_to_graph(self,x):\r\n #return np.cos(x)\r\n return np.tan(x)\r\n\r\n def func_to_graph2(self,x):\r\n return np.sin(x)\r\n\r\nI replaced \"return np.cos(x)\" to \"return np.tan(x)\"...i got this:\r\n![image](https://user-images.githubusercontent.com/36161299/63267544-e140a700-c2c4-11e9-9164-a14d37ee8673.png)\r\n\r\nand then I replaced \"return np.cos(x)\" to \"return np.sec(x)/cot(x)/csc(x)\"...i got this:\r\nAttributeError: module 'numpy' has no attribute 'sec'...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b74e5ca254bccc1575b4c7b7de3c1cb2010aac75', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {\"('VGroup', None, 868)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [17], 'path': None}]}", "own_code_loc": [{"Loc": [17], "path": null}], "ass_file_loc": [], 
"other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "loctype": {"code": [null, "manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "fc153bb49a529e8cbb02dd1514f06387cbf0ee6e", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/1206", "iss_label": "", "title": "Manim can't find my png file", "body": "I'm new to coding and am trying to learn manim, which I'm using on my macbook pro. I'm trying to create a scene where manim draws a png file I saved. I saved the png file as \"shirt.png\" in my manim folder. I then ran the following code:\r\n\r\n\r\n```\r\nfrom manimlib.imports import *\r\n\r\nclass OutFit(Scene):\r\n\tdef construct(self):\r\n\t\t\r\n\t\tshirt = ImageMobject(\"shirt\")\r\n\t\t\r\n\t\tself.play(Write(shirt))\r\n```\r\nI've looked up several ways of how to get manim to do images and some solutions, but since I'm pretty new at this I don't always understand the answers I've found from other people's issues or if it applies to mine. I keep getting this error response:\r\n\r\nraise IOError(\"File {} not Found\".format(file_name))\r\nOSError: File shirt not Found\r\n\r\nAny help is much appreciated. \r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fc153bb49a529e8cbb02dd1514f06387cbf0ee6e', 'files': [{'path': 'manimlib/animation/fading.py', 'Loc': {\"('FadeIn', None, 34)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/animation/fading.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "3b1b", "repo_name": "manim", "base_commit": "64c960041b5b9dcb0aac50019268a3bdf69d9563", "is_iss": 0, "iss_html_url": "https://github.com/3b1b/manim/issues/608", "iss_label": "", "title": "What is VMobject exactly?", "body": "Can anyone explain what is the purpose of `VMobject` and how it differs from `Mobject`?\r\n\r\nI am trying to make some `old_projects` work. For example, I had to change `PMobject` to inherit from `VMobject` instead of `Mobject` in order to fix `NumberLineScene`. I do not know if it is correct thing to do or how will it affect the other scripts because I am unable to find the fundamental differences between the two objects. The wiki does not explain a lot, so please tell some detailed information.\r\n\r\nI dug commit histories and saw \r\n\r\n> \"Starting to vectorize all things\"\r\n\r\n kind of commit messages when the `VMobject` class is added to the engine. 
What does it mean \"Vectorize\" in this context?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '64c960041b5b9dcb0aac50019268a3bdf69d9563', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a2779fe2f6c9ab29508676f21242b1c6b88e2f67", "is_iss": 0, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5229", "iss_label": "documentation\nenhancement\nfix-me", "title": "[Documentation]: Micro-agents", "body": "**What problem or use case are you trying to solve?**\r\n\r\nCurrently in the `openhands/agenthub/codeact_agent` directory, we have an implementation of micro agents, but this is not documented.\r\n\r\nTo do so, we can:\r\n1. read the implementation of codeact agent\r\n2. read an example microagent in `openhands/agenthub/codeact_agent/micro/github.md`\r\n3. add documentation to `openhands/agenthub/codeact_agent/README.md`\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a2779fe2f6c9ab29508676f21242b1c6b88e2f67', 'files': [{'path': 'microagents/README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["microagents/README.md"], "test": [], "config": [], "asset": []}} -{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "08a2dfb01af1aec6743f5e4c23507d63980726c0", "is_iss": 0, "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/635", "iss_label": "bug", "title": "Ollama support issue.", "body": "\r\n#### Describe the bug\r\n\r\nWhen trying to configure OpenDevin to run with Ollama there are requests that are being sent to the ollama server like this:\r\n\r\n\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/1931e068-0341-429b-8c4e-0dd2da36f54c)\r\n\r\n\r\nThe post request should look like this:\r\n`\"POST /chat/completions HTTP/1.1\"`\r\n\r\n\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n\r\n```bash\r\ncommit 5c640c99cafb3c718dad60f377f3a725a8bab1de (HEAD -> local-llm-flag, origin/main, origin/HEAD, main)\r\n```\r\n\r\n\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```toml\r\nWORKSPACE_DIR=\"./workspace\"\r\nLLM_BASE_URL=\"http://localhost:8000\"\r\nLLM_MODEL=\"ollama/starcoder2:15b\"\r\nLLM_EMBEDDING_MODEL=\"ollama/starcoder2:15b\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model: ollama/starcoder2\r\n* Agent: MonologueAgent\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\ngit clone ...\r\nmake build\r\nmake start-backend\r\nmake start-frontend\r\n```\r\n\r\n**Steps to Reproduce**:\r\n1. In `opendevin/llm/llm.py` in `__init__` replace `self.model = model if model else DEFAULT_MODEL_NAME` with `self.model_name = DEFAULT_MODEL_NAME`\r\n2. Run your local model on litellm `litellm --model ollama/starcoder2:15b --port 8000`\r\n3. Run `make build` then `make start-backend` and `make start-frontend`\r\n4. Ask devin to do anything ex 'make a hello world script in python'\r\n5. 
Observe 404 errors spammed in litellm server log\r\n\r\n**Logs, error messages, and screenshots**:\r\nThis is a log from the backend server running from `make start-backend` steps 0-99 all look the same.\r\n```\r\n==============\r\nSTEP 99\r\n\r\nPLAN:\r\nplease make a simple flask app that says hello world.\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1436, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 31, in condense\r\n resp = llm.completion(messages=messages)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 328, in completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 325, in completion\r\n response = self.function_with_fallbacks(**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1419, in function_with_fallbacks\r\n raise original_exception\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1344, in function_with_fallbacks\r\n response = self.function_with_retries(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1496, in function_with_retries\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1462, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed 
model=ollama/starcoder2:15b\r\n\r\nERROR:\r\nError condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1436, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 31, in condense\r\n resp = llm.completion(messages=messages)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 328, in completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 325, in completion\r\n response = self.function_with_fallbacks(**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1419, in function_with_fallbacks\r\n raise original_exception\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1344, in function_with_fallbacks\r\n response = self.function_with_retries(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1496, in function_with_retries\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1462, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File 
\"/home/quimbo/OpenDevin/opendevin/controller/agent_controller.py\", line 112, in step\r\n action = self.agent.step(self.state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py\", line 153, in step\r\n self._add_event(prev_action.to_dict())\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py\", line 96, in _add_event\r\n self.monologue.condense(self.llm)\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 36, in condense\r\n raise RuntimeError(f\"Error condensing thoughts: {e}\")\r\nRuntimeError: Error condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nOBSERVATION:\r\nError condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\nExited before finishing\r\n```\r\n\r\n#### Additional Context\r\n\r\nLitellm for local models is expecting api calls in the following format:\r\n\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/67b10c26-a9e6-44a1-a79e-908fc7d3749f)\r\n\r\nFrom: `http://localhost:8000/#/`\r\n\r\nI know that the problem is whatever is managing the api calls is set to call `/api/generate/` because this is the convention, but for local server that is not supported. I do not know where to look to fix this, any ideas?\r\n\r\nThe server responds when I test it like this:\r\n```\r\ndef query_local_llm(prompt, limit=TOKEN_LIMIT):\r\n # Replace with your actual server address and port\r\n url = \"http://0.0.0.0:8000/chat/completions\"\r\n payload = {\r\n \"model\": \"ollama/mistral\",\r\n \"messages\" : [{\"content\": prompt, \"role\": \"user\"}],\r\n \"max_tokens\": limit\r\n }\r\n response = requests.post(url, json=payload)\r\n```\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/b9bae877-5bd4-4864-b672-9678bb9a294e)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '08a2dfb01af1aec6743f5e4c23507d63980726c0', 'files': [{'path': 'opendevin/llm/LOCAL_LLM_GUIDE.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["opendevin/llm/LOCAL_LLM_GUIDE.md"], "test": [], "config": [], "asset": []}} -{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "d636e5baa8a077e2869bfe3b76525efec42392ec", "is_iss": 0, "iss_html_url": "https://github.com/scrapy/scrapy/issues/2276", "iss_label": "", "title": "can LinkExtractor extract scrapy.link with node info", "body": "the html is like below, i want to extract the link `/example/category/pg{page}/`, but the `scrapy.link` does not contains the node info(`currentPage` and `totalPage`), how can i extract the link with the node info \n\n``` html\n
      \n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd636e5baa8a077e2869bfe3b76525efec42392ec', 'files': [{'path': 'scrapy/http/response/text.py', 'Loc': {\"('TextResponse', 'css', 117)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/http/response/text.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "892467cb8a40c54840284a08d0f98ab1b3af7bc4", "is_iss": 0, "iss_html_url": "https://github.com/scrapy/scrapy/issues/4565", "iss_label": "", "title": "AttributeError: module 'resource' has no attribute 'getrusage'", "body": "version : Scrapy 2.1.0\r\n\r\n```\r\n2020-05-11 20:05:28 [scrapy.core.engine] INFO: Spider opened\r\n2020-05-11 20:05:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)\r\n2020-05-11 20:05:28 [dy] INFO: Spider opened: dy\r\n2020-05-11 20:05:28 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 161, in maybeDeferred_coro\r\n result = f(*args, **kw)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\pydispatch\\robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 55, in engine_started\r\n self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 48, in get_virtual_size\r\n size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss\r\nAttributeError: module 'resource' has no attribute 'getrusage'\r\n```\r\n\r\n```\r\n2020-05-11 20:05:43 [scrapy.core.engine] INFO: Closing spider (finished)\r\n2020-05-11 20:05:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:\r\n{'downloader/request_bytes': 6751,\r\n 'downloader/request_count': 14,\r\n 'downloader/request_method_count/GET': 14,\r\n 'downloader/response_bytes': 12380415,\r\n 'downloader/response_count': 14,\r\n 'downloader/response_status_count/200': 10,\r\n 'downloader/response_status_count/302': 4,\r\n 'elapsed_time_seconds': 14.631021,\r\n 'finish_reason': 'finished',\r\n 'finish_time': datetime.datetime(2020, 5, 11, 12, 5, 43, 378200),\r\n 'item_scraped_count': 65,\r\n 'log_count/DEBUG': 85,\r\n 'log_count/ERROR': 1,\r\n 'log_count/INFO': 9,\r\n 'request_depth_max': 1,\r\n 'response_received_count': 10,\r\n 'scheduler/dequeued': 6,\r\n 'scheduler/dequeued/memory': 6,\r\n 'scheduler/enqueued': 6,\r\n 'scheduler/enqueued/memory': 6,\r\n 'start_time': datetime.datetime(2020, 5, 11, 12, 5, 28, 747179)}\r\n2020-05-11 20:05:43 [scrapy.core.engine] INFO: Spider closed (finished)\r\n2020-05-11 20:05:43 [scrapy.utils.signal] ERROR: Error caught on signal handler: >\r\nTraceback (most recent call last):\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 161, in maybeDeferred_coro\r\n result = f(*args, **kw)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\pydispatch\\robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File 
\"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 70, in engine_stopped\r\n for tsk in self.tasks:\r\nAttributeError: 'MemoryUsage' object has no attribute 'tasks'\r\n```\r\n\r\n(edited for text formatting)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '892467cb8a40c54840284a08d0f98ab1b3af7bc4', 'files': [{'path': 'scrapy/commands/settings.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/commands/settings.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "commaai", "repo_name": "openpilot", "base_commit": "ce9559cc54433244cb01d4781302eb072a3fd519", "is_iss": 0, "iss_html_url": "https://github.com/commaai/openpilot/issues/30078", "iss_label": "bug\nfingerprint\ncar\nford", "title": "2023 Ford Maverick Not Recognized", "body": "### Describe the bug\n\nCar Not Recognized\r\n\r\nLooks like all the values for firmware are the same as what is already in values.py\n\n### Which car does this affect?\n\nFord Maverick 2023\n\n### Provide a route where the issue occurs\n\n66833387c2bbbca0|2023-09-27--21-13-05\n\n### openpilot version\n\nmaster-ci\n\n### Additional info\n\n`{'carParams': {'alternativeExperience': 1,\r\n 'autoResumeSng': True,\r\n 'carFingerprint': 'mock',\r\n 'carFw': [{'address': 2016,\r\n 'brand': 'ford',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1842,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'shiftByWire',\r\n 'fwVersion': b'NZ6P-14G395-AD\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1850,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0},\r\n {'address': 2016,\r\n 'brand': 'mazda',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': 
True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0}],\r\n 'carName': 'mock',\r\n 'carVin': '3FTTW8E31PRA79783',\r\n 'centerToFront': 1.350000023841858,\r\n 'communityFeatureDEPRECATED': False,\r\n 'dashcamOnly': False,\r\n 'directAccelControlDEPRECATED': False,\r\n 'enableApgsDEPRECATED': False,\r\n 'enableBsm': False,\r\n 'enableCameraDEPRECATED': False,\r\n 'enableDsu': False,\r\n 'enableGasInterceptor': False,\r\n 'experimentalLongitudinalAvailable': False,\r\n 'fingerprintSource': 'can',\r\n 'flags': 0,\r\n 'fuzzyFingerprint': False,\r\n 'hasStockCameraDEPRECATED': False,\r\n 'isPandaBlackDEPRECATED': False,\r\n 'lateralTuning': {'pid': {'kf': 0.0}},\r\n 'longitudinalActuatorDelayLowerBound': 0.15000000596046448,\r\n 'longitudinalActuatorDelayUpperBound': 0.15000000596046448,\r\n 'longitudinalTuning': {'deadzoneBP': [0.0], 'deadzoneV': [0.0], 'kf': 1.0, 'kiBP': [0.0], 'kiV': [1.0], 'kpBP': [0.0], 'kpV': [1.0]},\r\n 'mass': 1836.0,\r\n 'maxLateralAccel': 10.0,\r\n 'maxSteeringAngleDegDEPRECATED': 0.0,\r\n 'minEnableSpeed': -1.0,\r\n 'minSpeedCanDEPRECATED': 0.0,\r\n 'minSteerSpeed': 0.0,\r\n 'networkLocation': 'fwdCamera',\r\n 'notCar': False,\r\n 'openpilotLongitudinalControl': False,\r\n 'pcmCruise': True,\r\n 'radarTimeStep': 0.05000000074505806,\r\n 'radarUnavailable': False,\r\n 'rotationalInertia': 3139.534912109375,\r\n 'safetyConfigs': [{'safetyModel': 'noOutput', 'safetyParam': 0, 'safetyParam2DEPRECATED': 0, 'safetyParamDEPRECATED': 0}],\r\n 'safetyModelDEPRECATED': 'silent',\r\n 'safetyModelPassiveDEPRECATED': 'silent',\r\n 'safetyParamDEPRECATED': 0,\r\n 'startAccel': 0.0,\r\n 'startingAccelRateDEPRECATED': 0.0,\r\n 'startingState': False,\r\n 'steerActuatorDelay': 0.0,\r\n 'steerControlType': 'torque',\r\n 'steerLimitAlert': False,\r\n 'steerLimitTimer': 1.0,\r\n 'steerRateCostDEPRECATED': 0.0,\r\n 'steerRatio': 13.0,\r\n 'steerRatioRear': 0.0,\r\n 'stopAccel': -2.0,\r\n 'stoppingControl': True,\r\n 'stoppingDecelRate': 0.800000011920929,\r\n 'tireStiffnessFactor': 1.0,\r\n 'tireStiffnessFront': 201087.203125,\r\n 'tireStiffnessRear': 317877.90625,\r\n 'transmissionType': 'unknown',\r\n 'vEgoStarting': 0.5,\r\n 'vEgoStopping': 0.5,\r\n 'wheelSpeedFactor': 1.0,\r\n 'wheelbase': 2.700000047683716},\r\n 'logMonoTime': 971923210573,\r\n 'valid': 
True}\r\n{'carParams': {'alternativeExperience': 1,\r\n 'autoResumeSng': True,\r\n 'carFingerprint': 'mock',\r\n 'carFw': [{'address': 2016,\r\n 'brand': 'ford',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1842,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'shiftByWire',\r\n 'fwVersion': b'NZ6P-14G395-AD\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1850,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0},\r\n {'address': 2016,\r\n 'brand': 'mazda',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0}],\r\n 'carName': 'mock',\r\n 'carVin': 
'3FTTW8E31PRA79783',\r\n 'centerToFront': 1.350000023841858,\r\n 'communityFeatureDEPRECATED': False,\r\n 'dashcamOnly': False,\r\n 'directAccelControlDEPRECATED': False,\r\n 'enableApgsDEPRECATED': False,\r\n 'enableBsm': False,\r\n 'enableCameraDEPRECATED': False,\r\n 'enableDsu': False,\r\n 'enableGasInterceptor': False,\r\n 'experimentalLongitudinalAvailable': False,\r\n 'fingerprintSource': 'can',\r\n 'flags': 0,\r\n 'fuzzyFingerprint': False,\r\n 'hasStockCameraDEPRECATED': False,\r\n 'isPandaBlackDEPRECATED': False,\r\n 'lateralTuning': {'pid': {'kf': 0.0}},\r\n 'longitudinalActuatorDelayLowerBound': 0.15000000596046448,\r\n 'longitudinalActuatorDelayUpperBound': 0.15000000596046448,\r\n 'longitudinalTuning': {'deadzoneBP': [0.0], 'deadzoneV': [0.0], 'kf': 1.0, 'kiBP': [0.0], 'kiV': [1.0], 'kpBP': [0.0], 'kpV': [1.0]},\r\n 'mass': 1836.0,\r\n 'maxLateralAccel': 10.0,\r\n 'maxSteeringAngleDegDEPRECATED': 0.0,\r\n 'minEnableSpeed': -1.0,\r\n 'minSpeedCanDEPRECATED': 0.0,\r\n 'minSteerSpeed': 0.0,\r\n 'networkLocation': 'fwdCamera',\r\n 'notCar': False,\r\n 'openpilotLongitudinalControl': False,\r\n 'pcmCruise': True,\r\n 'radarTimeStep': 0.05000000074505806,\r\n 'radarUnavailable': False,\r\n 'rotationalInertia': 3139.534912109375,\r\n 'safetyConfigs': [{'safetyModel': 'noOutput', 'safetyParam': 0, 'safetyParam2DEPRECATED': 0, 'safetyParamDEPRECATED': 0}],\r\n 'safetyModelDEPRECATED': 'silent',\r\n 'safetyModelPassiveDEPRECATED': 'silent',\r\n 'safetyParamDEPRECATED': 0,\r\n 'startAccel': 0.0,\r\n 'startingAccelRateDEPRECATED': 0.0,\r\n 'startingState': False,\r\n 'steerActuatorDelay': 0.0,\r\n 'steerControlType': 'torque',\r\n 'steerLimitAlert': False,\r\n 'steerLimitTimer': 1.0,\r\n 'steerRateCostDEPRECATED': 0.0,\r\n 'steerRatio': 13.0,\r\n 'steerRatioRear': 0.0,\r\n 'stopAccel': -2.0,\r\n 'stoppingControl': True,\r\n 'stoppingDecelRate': 0.800000011920929,\r\n 'tireStiffnessFactor': 1.0,\r\n 'tireStiffnessFront': 201087.203125,\r\n 'tireStiffnessRear': 317877.90625,\r\n 'transmissionType': 'unknown',\r\n 'vEgoStarting': 0.5,\r\n 'vEgoStopping': 0.5,\r\n 'wheelSpeedFactor': 1.0,\r\n 'wheelbase': 2.700000047683716},\r\n 'logMonoTime': 1021914306894,\r\n 'valid': True}`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ce9559cc54433244cb01d4781302eb072a3fd519', 'files': []}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66", "is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/775", "iss_label": "", "title": "Content marked as consumed in 0.13.6", "body": "Content is immediately marked as consumed in 0.13.6, causing calls to e.g. 
response.iter_content() to throw an error.\n\nTest code (tested with python 2.6):\n\n```\nimport requests\n\nr = requests.get('http://docs.python-requests.org/')\nif r._content_consumed:\n print 'consumed'\nelse:\n print 'not consumed'\n```\n\nIn 0.13.5 this prints:\nnot consumed\n\nIn 0.13.6 this prints:\nconsumed\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66', 'files': [{'path': 'requests/models.py', 'Loc': {\"('Request', '__init__', 47)\": {'mod': [62]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["requests/models.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "2de907ad778de270911acaffe93883f0e2729a4a", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/4602", "iss_label": "", "title": "Chunk-encoded request doesn't recognize iter_content generator", "body": "Passing a generator created by iter_content() as request data raises \"TypeError: sendall() argument 1 must be string or buffer, not generator\".\r\n\r\n## Expected Result\r\n\r\nThe POST request successfully delives the content from the GET request.\r\n\r\n## Actual Result\r\n\r\nA TypeError is raised:\r\n```\r\nTraceback (most recent call last):\r\n File \"..\\test.py\", line 7, in \r\n PostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n File \"..\\test.py\", line 6, in PostForward\r\n return requests.post(url=dst, data=data, headers={'Content-Length': length})\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\adapters.py\", line 440, in send\r\n timeout=timeout\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 601, in urlopen\r\n chunked=chunked)\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 357, in _make_request\r\n conn.request(method, url, **httplib_request_kw)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1042, in request\r\n self._send_request(method, url, body, headers)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1082, in _send_request\r\n self.endheaders(body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1038, in endheaders\r\n self._send_output(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 886, in _send_output\r\n self.send(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 858, in send\r\n self.sock.sendall(data)\r\n File \"C:\\Python27\\lib\\socket.py\", line 228, in meth\r\n return getattr(self._sock,name)(*args)\r\nTypeError: sendall() argument 1 must be string or buffer, not generator\r\n```\r\n\r\n## Reproduction Steps\r\n\r\n```python\r\nimport requests\r\ndef PostForward(src, dst):\r\n\twith requests.get(url=src, stream=True) as srcResponse:\r\n\t\tlength = 
srcResponse.headers['Content-Length']\r\n\t\tdata = srcResponse.iter_content(1024)\r\n\t\treturn requests.post(url=dst, data=data, headers={'Content-Length': length})\r\nPostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n```\r\n\r\n## System Information\r\n\r\n $ python -m requests.help\r\n\r\n```\r\n{\r\n \"chardet\": {\r\n \"version\": \"3.0.4\"\r\n },\r\n \"cryptography\": {\r\n \"version\": \"\"\r\n },\r\n \"idna\": {\r\n \"version\": \"2.6\"\r\n },\r\n \"implementation\": {\r\n \"name\": \"CPython\",\r\n \"version\": \"2.7.14\"\r\n },\r\n \"platform\": {\r\n \"release\": \"10\",\r\n \"system\": \"Windows\"\r\n },\r\n \"pyOpenSSL\": {\r\n \"openssl_version\": \"\",\r\n \"version\": null\r\n },\r\n \"requests\": {\r\n \"version\": \"2.18.4\"\r\n },\r\n \"system_ssl\": {\r\n \"version\": \"100020bf\"\r\n },\r\n \"urllib3\": {\r\n \"version\": \"1.22\"\r\n },\r\n \"using_pyopenssl\": false\r\n}\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "requests"}, {"pro": "toolbelt", "path": ["requests_toolbelt/streaming_iterator.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["requests_toolbelt/streaming_iterator.py"], "doc": [], "test": [], "config": [], "asset": ["requests"]}} -{"organization": "psf", "repo_name": "requests", "base_commit": "f17ef753d2c1f4db0d7f5aec51261da1db20d611", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/3031", "iss_label": "Needs Info\nQuestion/Not a bug", "title": "[WinError 10048] Only one usage of each socket address ...", "body": "I notice that despite using requests.Session() - I still seem to be creating new connections/sockets which eventually exhaust (TIME_WAIT) and I get the following error:\n\n> [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',))\n\n```\ns = requests.Session()\ndata = zip(url_routes, cycle(s))\ncalc_routes = pool.map(processRequest, data)\n\n```\n\nI posted a bit more [here](http://stackoverflow.com/questions/35793908/python-multiprocessing-associate-a-process-with-a-session), however not sure how to address this\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "6f659a41794045292b836859f1281d33eeed8260", "is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/3740", "iss_label": "", "title": "File download weirdness", "body": "I noticed this issue building conda recipes. 
Conda uses requests to download files from the internet.\r\n\r\nThe file that is being fetched is: https://dakota.sandia.gov/sites/default/files/distributions/public/dakota-6.5-public.src.tar.gz\r\n(link found here: https://dakota.sandia.gov/download.html)\r\n\r\nDownloading with curl -O\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with urllib2 (from the standard library):\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with requests-2.12.1 (supplied with conda)\r\nfilesize: 248MB\r\nmd5: 41e4268140d850756812510512d8eee8\r\ntar -tf doesn't indicate any corruption.\r\n\r\nI'm not sure what is different with this particular URL, but the other files I tried with requests worked. I don't know where the extra 170MB is coming from?\r\n\r\ncode used to download files:\r\n```python\r\ndef download_file(url, fn):\r\n r = requests.get(url, stream=True)\r\n with open(fn, 'wb') as f:\r\n for chunk in r.iter_content(chunk_size=1024): \r\n if chunk:\r\n f.write(chunk)\r\n\r\ndef download_urllib2(url, fn):\r\n f = urllib2.urlopen(url)\r\n with open(fn, 'wb') as fh:\r\n for x in iter(lambda: f.read(1024), b''):\r\n fh.write(x)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6f659a41794045292b836859f1281d33eeed8260', 'files': [{'path': 'docs/user/quickstart.rst', 'Loc': {'(None, None, 166)': {'mod': [166]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/user/quickstart.rst"], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "62176a1ca7207db37273365b4691ed599203b828", "is_iss": 0, "iss_html_url": "https://github.com/psf/requests/issues/3849", "iss_label": "", "title": "Received response with content-encoding: gzip, but failed to decode it", "body": "```python\r\nimport requests\r\n\r\nrequests.get('http://gett.bike/')\r\n```\r\nThis code raises the following exception:\r\n```python\r\nContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.',\r\nerror('Error -3 while decompressing data: incorrect data check',))\r\n```\r\nArch linux x64\r\nrequests==2.13.0\r\npython=3.6.0", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '62176a1ca7207db37273365b4691ed599203b828', 'files': [{'path': 'src/requests/api.py', 'Loc': {\"(None, 'request', 14)\": {'mod': [24]}}, 'status': 'modified'}, {'Loc': [4], 'path': None}]}", "own_code_loc": [{"Loc": [4], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "src/requests/api.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "057722af23edf3f69bf7bdfed7c6c32cbe1ce2e7", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/3015", "iss_label": "", "title": "Ability to set timeout after response", "body": "For devs who use this great library, it would be very beneficial to be able to set the timeout AFTER initial connection. 
There are a few scenarios where this is useful but one of the main patterns/use cases is this:\n\n```\n\nimport requests\nimport socket\n\n# May or may not subclass threading.Thread\nclass Getter(object):\n def __init__(self):\n self.request = requests.get(url, stream=True)\n\n def run(self):\n with open(path, 'r+b') as file:\n\n bytes_consumed = 0\n while True:\n try:\n\n chunk = self.request.raw.read(size)\n if not chunk:\n break\n chunk_length = len(chunk)\n\n file.write(chunk)\n bytes_consumed += chunk_length\n\n except socket.timeout:\n # handle incomplete download by using range header next time, etc.\n```\n\nHandling incomplete downloads due to connection loss is common and especially important when downloading large or many files (or both). As you can see, this can be achieved in a fairly straightforward way. The issue is there is really no good way to write tests for this. Each method would involve OS specific code which would also be a no-go for CI services.\n\nWhat would be an option is the ability to set the timeout after establishing a connection. This way in a test you could do \"r.timeout = (None, 0.00001)\" and during reading it would simulate a timeout.\n\nTo my knowledge this is no way currently to inject a new Timeout class retroactively. Is this correct?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "psf", "repo_name": "requests", "base_commit": "1285f576ae0a848de27af10d917c19b60940d1fa", "is_iss": 1, "iss_html_url": "https://github.com/psf/requests/issues/3774", "iss_label": "", "title": "bad handshake error with ssl3", "body": "I have an inhouse IIS server with ssl3 but an expired certificate, so I used requests without certificate verification and it was working fine with requests 2.11.1. But after I upgrade requests to 2.12.0, there was an error occured. 
\r\n\r\nthe code is:\r\n...\r\nrequests.get('https://10.192.8.89:8080/yps_report', verify=False)\r\n...\r\n\r\nerror message:\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 417, in wrap_socket\r\n cnx.do_handshake()\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1426, in do_handshake\r\n self._raise_ssl_error(self._ssl, result)\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1167, in _raise_ssl_error\r\n raise SysCallError(-1, \"Unexpected EOF\")\r\nOpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 594, in urlopen\r\n chunked=chunked)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 350, in _make_request\r\n self._validate_conn(conn)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 835, in _validate_conn\r\n conn.connect()\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connection.py\", line 323, in connect\r\n ssl_context=context)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\util\\ssl_.py\", line 324, in ssl_wrap_socket\r\n return context.wrap_socket(sock, server_hostname=server_hostname)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 424, in wrap_socket\r\n raise ssl.SSLError('bad handshake: %r' % e)\r\nssl.SSLError: (\"bad handshake: SysCallError(-1, 'Unexpected EOF')\",)\r\n...\r\n\r\nI tried to downgrade requests to 2.11.1 and the error was gone. I have no idea how to fix this.\r\nfrom requests.adapters import HTTPAdapter\nfrom requests.packages.urllib3.util.ssl_ import create_urllib3_context\n\n# This is the 2.11 Requests cipher string.\nCIPHERS = (\n 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'\n 'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'\n '!eNULL:!MD5'\n)\n\nclass DESAdapter(HTTPAdapter):\n    def init_poolmanager(self, *args, **kwargs):\n        context = create_urllib3_context(ciphers=CIPHERS)\n        kwargs['ssl_context'] = context\n        return super(HTTPAdapter, self).init_poolmanager(*args, **kwargs)\n\ns = requests.Session()\ns.mount('https://10.192.8.89', DESAdapter())", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [41], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nNeed to put the user's code from one of the user's comments below into it", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "ansible", "repo_name": "ansible", "base_commit": "a6d4c3ff7cf43c24be6622102cee834fc5096496", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/78759", "iss_label": "module\nsupport:core\nbug\naffects_2.9", "title": "\"Invalid data passed to 'loop', it requires a list, got this instead: .", "body": "### Summary\r\n\r\nWhen trying to pass a variable called i.e. 
sysctl.values to loop, I will get the above error.\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ndebug (only used for debugging)\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.9.27\r\n config file = /home/rf/.ansible.cfg\r\n configured module search path = ['/home/rf/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.6 (main, Aug 2 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n[I] -2-> ansible-config dump --only-changed\r\nANSIBLE_PIPELINING(/home/rf/.ansible.cfg) = True\r\nANSIBLE_SSH_ARGS(/home/rf/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s\r\nDEFAULT_FORKS(/home/rf/.ansible.cfg) = 50\r\nDEFAULT_HOST_LIST(/home/rf/.ansible.cfg) = ['/home/rf/hosts']\r\nINVENTORY_CACHE_ENABLED(/home/rf/.ansible.cfg) = True\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nFedora 36\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n```yaml (paste below)\r\n- name: Test\r\n hosts: localhost\r\n gather_facts: True\r\n tasks:\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl2 }}\"\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl.values }}\"\r\n vars:\r\n sysctl:\r\n values:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n sysctl2:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n```\r\n\r\n\r\n\r\n\r\n### Expected Results\r\n\r\nOutput of debug using sysctl.values\r\n\r\n### Actual Results\r\n\r\n```console\r\nPLAY [Test] ********************************************************************************************************************************************************************************************\r\n\r\nTASK [Gathering Facts] *********************************************************************************************************************************************************************************\r\nok: [localhost]\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nok: [localhost] => (item={'name': 'net.ipv4.ip_forward', 'value': '1'}) => {\r\n \"msg\": {\r\n \"name\": \"net.ipv4.ip_forward\",\r\n \"value\": \"1\"\r\n }\r\n}\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nfatal: [localhost]: FAILED! => {\"msg\": \"Invalid data passed to 'loop', it requires a list, got this instead: . 
Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup.\"}\r\n\r\nPLAY RECAP *********************************************************************************************************************************************************************************************\r\nlocalhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\n```\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [59], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "8af920c8924b2fd9a0e4192c3c7e6085b687bfdc", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/82382", "iss_label": "bug\naffects_2.16", "title": "Ansible core 2.16.1 broke AnsibleUnsafeBytes iteration", "body": "### Summary\r\n\r\nUpgrading form 2.16.0 to 2.16.1 (Ansible 9.0.1 to 9.1.0), iterating over AnsibleUnsafeBytes does not create a list of numbers anymore.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ncore, unsafe_proxy\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\n\r\n\r\nansible [core 2.16.1]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.12/site-packages/ansible\r\n ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.12.0 (main, Nov 29 2023, 03:32:06) [GCC 10.2.1 20210110] (/usr/local/bin/python)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n\r\n/bin/sh: 1: less: not found\r\n```\r\n\r\n(sorry, dockerized environment)\r\n\r\n\r\n### OS / Environment\r\n\r\nDebian bullseye / 11 (in python docker image: `python:3.12.0-bullseye`), ansible via pip (`ansible==9.1.0`)\r\n\r\n### Steps to Reproduce\r\n\r\n\r\n```py\r\nfrom ansible.utils.unsafe_proxy import AnsibleUnsafeText \r\nx = AnsibleUnsafeText(\"asdf\")\r\ny = x.encode(\"utf8\")\r\nlist(y)\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n[97, 115, 100, 102]\r\n```\r\n\r\nThis is what happens on 2.16.0.\r\n\r\n### Actual Results\r\n\r\n```console\r\n[b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', 
b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00']\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8af920c8924b2fd9a0e4192c3c7e6085b687bfdc', 'files': [{'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Other"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "bcf9cd1e2a01d8e111a28db157ebc255a5592dca", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/20085", "iss_label": "cloud\naffects_2.1\nmodule\ndocker\nbug", "title": "docker_container task fail on exit code", "body": "Unless i'm missing something i expect that if I were to do something like the following the task would fail? 
But it does not \ud83d\ude1f \r\n\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\ndocker_container\r\n\r\n##### ANSIBLE VERSION\r\n```\r\n2.1.1.0\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### STEPS TO REPRODUCE\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n##### EXPECTED RESULTS\r\nShould fail the task\r\n\r\n##### ACTUAL RESULTS\r\nTask is ok.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ansible", "pro": "ansible-modules-core", "path": ["cloud/docker/docker_container.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["cloud/docker/docker_container.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/19352", "iss_label": "affects_2.0\nmodule\nsupport:core\nbug\nfiles", "title": "Template update convert \\n to actual new line", "body": "##### ISSUE TYPE\r\n\r\n Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\ntemplate\r\n\r\n##### ANSIBLE VERSION\r\n\r\n2.0 and higher\r\nCONFIGURATION\r\n```\r\n[ssh_connection]\r\ncontrol_path = %(directory)s/%%C\r\n```\r\n##### OS / ENVIRONMENT\r\n\r\nMac OS X 10.11.6\r\nCentos 6.x, 7.x\r\nSUMMARY\r\n\r\nIn the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing `(?m)\\n` . The output generated by the template module in versions 2.0 and later, treats the \\n as actual line break. Where as versions up to 1.9.6 retains the literal `(?m)\\n` without replacing the \\n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.\r\n\r\nAny way we can work around this issue? Thank you for your help.\r\n##### STEPS TO REPRODUCE\r\n\r\nOur execution flow is probably not the nicest - we want to reengineer it soon. 
Basic steps:\r\n\r\n Run a shell script with ansible-playbook command that pass in an env variable with `(?m)\\n` literal.\r\n Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.\r\n The task yaml file invokes the template module.\r\n\r\nIn the snippet below I stripped out other lines/vars for clarity.\r\n\r\nmain shell\r\n```\r\nset GROK_PATTERN_GENERAL_ERROR_PG=\"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n```\r\n```\r\nansible-playbook -i ../common/host.inventory \\\r\n -${VERBOSE} \\\r\n t.yml \\\r\n ${CHECK_ONLY} \\\r\n --extra-vars \"hosts='${HOST}'\r\n xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'\r\n \"\r\n```\r\nt.yml\r\n```\r\n---\r\n- hosts: 127.0.0.1\r\n connection: local\r\n\r\n tasks:\r\n - include_vars: ../common/defaults/main.yml\r\n - name: generate logstash kafka logscan filter config file\r\n include: tasks/t.yml\r\n vars:\r\n logstash_grok_general_error: \"{{xlogstash_grok_general_error}}\"\r\n```\r\ntasks/t.yml\r\n```\r\n---\r\n - name: generate logstash kafka logscan filter config file\r\n template: src=../common/templates/my.conf.j2\r\n dest=\"./500-filter.conf\"\r\n```\r\nmy.conf.j2\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"{{logstash_grok_general_error}}\"\r\n ]\r\n }\r\n```\r\nNote the `(?m)\\n` are still on the same line.\r\n##### EXPECTED RESULTS\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```\r\n##### ACTUAL RESULTS\r\n\r\nNote `(?m)\\n` now has the `\\n` as actual line break.\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\r\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd5324c11a0c389d2ede8375e2024cb37b9eb8ce5', 'files': [{'path': 'lib/ansible/template/__init__.py', 'Loc': {}}, {'path': 't.yml', 'Loc': [60]}]}", "own_code_loc": [{"path": "t.yml", "Loc": [60]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/template/__init__.py"], "doc": [], "test": [], "config": ["t.yml"], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7", "is_iss": 0, "iss_html_url": "https://github.com/ansible/ansible/issues/73922", "iss_label": "python3\nmodule\nsupport:core\nbug\naffects_2.10", "title": "cron: Remove/delete an environment variable", "body": "### Summary\r\n\r\nWith `env=yes`, `cron` add environment variable (with the `name` & `value`) parameters.\r\nI though that having `env` + `state=absent` would remove said variable, but that's not the case (the cron file is actually removed).\r\nAs such there is no way to remove a variable and the more obvious way to attempt to do it results in a surprising result.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\nansible.builtin.cron\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.10.5\r\n config file = /home/user/.ansible.cfg\r\n configured module search 
path = ['/usr/share/ansible']\r\n ansible python module location = /home/user/.local/lib/python3.8/site-packages/ansible\r\n executable location = /home/user/.local/bin/ansible\r\n python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]\r\n\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nUbuntu 20.04\r\n\r\n### Steps to Reproduce\r\n\r\n```yaml\r\n cron:\r\n cron_file: foobar\r\n user: root\r\n env: yes\r\n name: \"VAR\"\r\n value: \"False\"\r\n state: absent\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nThe \"VAR\" variable is removed from /etc/cron.d/foobar\r\n\r\n### Actual Results\r\n\r\n/etc/cron.d/foobar is removed.\r\nThere is no way to remove the \"VAR\" variable.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7', 'files': [{'path': 'lib/ansible/modules/cron.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["lib/ansible/modules/cron.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "7490044bbe28029afa9e3099d86eae9fda5f88b7", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/11351", "iss_label": "affects_2.0\naffects_2.3\nc:executor/playbook_executor\nsupport:core\nfeature\nP3", "title": "enable do/until with async tasks", "body": "##### ISSUE TYPE\nFeature Idea\n\n##### COMPONENT NAME\ncore\n\n##### ANSIBLE VERSION\n2.0\n\n##### CONFIGURATION\n\n\n##### OS / ENVIRONMENT\n\n\n##### SUMMARY\nWhen a task is marked as async, there is no way to loop until a condition is met.\nWith poll:0 and async_status you can poll for async task to complete but you cannot repeat the original async task itself until a condition is met.\n\n```\ncat /tmp/async-test.yml \n\n---\n# Run through the test of an async command\n\n- hosts: all\n tasks:\n - name: \"Check an async command\"\n command: /bin/sleep 3\n async: 5\n poll: 1\n register: command_result\n until: command_result.failed\n retries: 5\n delay: 10\n```\n\n```\n$ansible-playbook -i localhost, /tmp/async-test.yml \n ____________\n< PLAY [all] >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n _________________\n< GATHERING FACTS >\n -----------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nok: [localhost]\n ______________________________\n< TASK: Check an async command >\n ------------------------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nfatal: [localhost] => error while evaluating conditional: command_result.failed: {% if command_result.failed %} True {% else %} False {% endif %}\n\nFATAL: all hosts have already failed -- aborting\n ____________\n< PLAY RECAP >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n to retry, use: --limit @/opt/ashishkh/async-test.retry\n\nlocalhost : ok=1 changed=0 unreachable=2 failed=0 \n```\n\n\n##### STEPS TO REPRODUCE\n\n\n##### EXPECTED RESULTS\n\n\n##### ACTUAL RESULTS\n\n\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": "/tmp/async-test.yml", "Loc": [33]}], "ass_file_loc": [], "other_rep_loc": [], 
"analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["/tmp/async-test.yml"], "asset": []}} -{"organization": "ansible", "repo_name": "ansible", "base_commit": "833970483100bfe89123a5718606234115921aec", "is_iss": 1, "iss_html_url": "https://github.com/ansible/ansible/issues/67993", "iss_label": "cloud\naws\nopenstack\nmodule\nsupport:community\naffects_2.5\nbug\ntraceback\nsystem", "title": "Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol(unable to disable stickiness not supported in NLB)", "body": "##### SUMMARY\r\nWe are using Ansible 2.5 to deploy AWS resources in our environment. From March 02, 2019 our deployment is failing with the below error.\r\n\r\nERROR:\r\n=====\r\nTASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! => {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n##### ISSUE TYPE\r\n- Bug Report - Unable to disable stickiness not supported in NLB\r\n\r\n##### COMPONENT NAME\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n##### ANSIBLE VERSION\r\n```paste below\r\nAnsible version = 2.5.0\r\n```\r\n\r\n##### CONFIGURATION\r\n\r\n```paste below\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | 
default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 18.04 LTS / AWS environment\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\nKindly use the below playbook to deploy loadbalancer using Ansible on AWS cloud.\r\n\r\n\r\n```yaml\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n```\r\n\r\n\r\n\r\n##### EXPECTED RESULTS\r\nAn AWS Network loadbalancer will be created.\r\n\r\n\r\n##### ACTUAL RESULTS\r\nThe deployment fails with below error.\r\n\r\n\r\n```paste below\r\n TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\n17:21:08 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n```\r\n\r\n##### References\r\nI can see a similar issue occurred for terraform users as well.\r\n\r\nhttps://github.com/terraform-providers/terraform-provider-aws/issues/10494\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "6f718cee740e7cd423edd1136db78c5be49fa7c0", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2467", "iss_label": "question\nStale", "title": "Problems with weights", "body": "## \u2754Question\r\nHello, I have just run trainy.py script with my data and faced a problem - you wrote that weights are saved in runs directory, but in my case I have not found them. Everything is fine with hyp.yaml and opt.yaml but folder \"weights\" is empty. \r\nDo you have any guesses about this issue? \r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6f718cee740e7cd423edd1136db78c5be49fa7c0', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [470, 454]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nweights\u627e\u4e0d\u89c1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "06831aa9e905e0fa703958f6b3f3db443cf477f3", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/9079", "iss_label": "", "title": "Does adjusting the number of classes of a pretrained model work?", "body": "### Search before asking\r\n\r\n- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. *\r\n\r\n### Question\r\n\r\nHi everyone,\r\n\r\nI'm a bit confused about how to properly load a pretrained model with an adjusted number of classes for training with a custom dataset.\r\n\r\nOn the [Load YOLOv5 from PyTorch Hub \u2b50](https://github.com/ultralytics/yolov5/issues/36) page you've explained that one can adjust the number of classes in the pretrained model by using the following command. 
`model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)`\r\n\r\n\"Bildschirmfoto\r\n\r\nWhen I do so, I can see that a model.yaml file is overwritten, but I do not know where this file is stored. \r\n\r\nNow, what actually confuses me about the number of classes, is that when I try to use this pretrained model in detection, without any further training. I see an error, that the model was trained with nc=80 and my data is incompatible with nc=13:\r\n\r\n`AssertionError: ['yolov5s6.pt'] (80 classes) trained on different --data than what you passed (13 classes). Pass correct combination of --weights and --data that are trained together.`\r\n\r\nI know that I can not expect any proper predictions since the last layers are initialized with random weights, but I was expecting that the model is compatible with the 13 classes dataset.\r\n\r\nIs this behavior to be expected or am I doing something wrong here? \r\nDo I need to find and use the model.yaml file and is the only thing changed in there 'nc=13'?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '06831aa9e905e0fa703958f6b3f3db443cf477f3', 'files': [{'path': 'train.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "ee8988b8a2ed07af1b7c8807d39aad35369f0e28", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8", "iss_label": "Stale", "title": "training actually can not work", "body": "After trained on several epochs, I found the mAP is still very low. 
Does the training really works?\r\n\r\n```\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 14/299 6.4G 0.02273 0.002925 0.0003764 0.02603 11 640: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [54:20<00:00, 2.13it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [13:37<00:00, 8.51it/s]\r\n all 5.57e+04 1.74e+05 0.000332 0.00039 2.4e-06 8.59e-07\r\n\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 15/299 6.4G 0.02232 0.002874 0.000371 0.02556 7 640: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [54:36<00:00, 2.12it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [14:23<00:00, 8.06it/s]\r\n all 5.57e+04 1.74e+05 0.000342 0.000401 2.44e-06 8.66e-07\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 
'ee8988b8a2ed07af1b7c8807d39aad35369f0e28', 'files': [{'path': 'models/yolov5s.yaml', 'Loc': {'(None, None, 2)': {'mod': [2]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": ["models/yolov5s.yaml"], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "901243c7806be07b31073440cf721e73532a0734", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/894", "iss_label": "question", "title": "training stuck when loading dataset", "body": "## \u2754Question\r\nI follow the instructions to run coco128, \r\n```\r\npython train.py --img 640 --batch 16 --epochs 5 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights '',\r\n```\r\nthe ouput is \r\n```\r\nImage sizes 640 train, 640 test\r\nUsing 8 dataloader workers\r\nStarting training for 5 epochs...\r\n\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 0%| | 0/8 [00:00 16min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 128 ->16min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 ->20min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 --workers 16->16min/epoch\r\n![sendpix7](https://user-images.githubusercontent.com/39581901/127759471-a110c68f-d1d4-4580-afd2-ae8c8a17ef4a.jpg)\r\nMy question\r\n1. Why I increased the batch size but the time required for training did not decrease\r\n2. The relationship between workers and batch size, because I noticed that you seem to set it to a maximum of 8 in the code (why it is 8),\r\n3. When epoch=0 and 1, the GPU memory has changed, about x1.5? What may be the reason for this,", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b74929c910f9cd99d2ece587e57bce1ae000d3ba', 'files': [{'path': 'train.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "404749a33cc29d119f54b2ce35bf3b33a847a487", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2186", "iss_label": "question", "title": "Can we return objectness score and class score?", "body": "## \u2754Question\r\nI am wondering if it is possible to return confidence scores for objectness and classification separately for each predicted box during inference? I might be conceptually off base here, but I am interested in understanding if the model is unsure if the box itself is correct or if the class it is assigning to the box is correct. My understanding is the `conf` that is returned now is a combo of the two? 
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '404749a33cc29d119f54b2ce35bf3b33a847a487', 'files': [{'path': 'detect.py', 'Loc': {\"(None, 'detect', 18)\": {'mod': [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113]}}, 'status': 'modified'}, {'path': 'utils/general.py', 'Loc': {\"(None, 'non_max_suppression', 340)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/general.py", "detect.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "dabad5793a638cba1e5a2bbb878c9b87fe1a14a0", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/3942", "iss_label": "enhancement\nStale", "title": "For online cutting training and detection can be improve", "body": "## \ud83d\ude80 Feature\r\n\r\nFor big image training, usually people thinking about to cut the images, but yolov5 can only resize the image to small size. Such as VisDrone dataset, the smallest image can have 960*540 size, if resize to 640*640, size would be 640*360, but the target in dataset mostly are small object, resize the image make the target become more smaller, but if use bigger resolution, the cuda memory would exceed.\r\n\r\nSo I thought online cutting training and detection would be a good feature for yolov5 to improve, although cutting image would also increase the train time, but it would be a great idea for people who don't have large computing power GPU, also I think cutting image would be effective for small object detection. Although it's not a new idea in detection, it would be a useful way for people to their own detector.\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dabad5793a638cba1e5a2bbb878c9b87fe1a14a0', 'files': [{'path': 'utils/augmentations.py', 'Loc': {\"('Albumentations', '__init__', 16)\": {'mod': [22]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/augmentations.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "c8c5ef36c9a19c7843993ee8d51aebb685467eca", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1238", "iss_label": "question", "title": "img-weights", "body": "## \u2754Question\r\nparser.add_argument('--img-weights', action='store_true', help='use weighted image selection for training')\r\nin order to make --iimg-weights work, what else I need to do? 
\r\ndataset = LoadImagesAndLabels(path, imgsz, batch_size,\r\n augment=augment, # augment images\r\n hyp=hyp, # augmentation hyperparameters\r\n rect=rect, # rectangular training\r\n cache_images=cache,\r\n single_cls=opt.single_cls,\r\n stride=int(stride),\r\n pad=pad),\r\n should I add an extra param image_weights=True??\r\n\r\n \r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c8c5ef36c9a19c7843993ee8d51aebb685467eca', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [397]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/7072", "iss_label": "question", "title": "why can't I reproduce the mAP provided by README.md\uff08v6.1\uff09\uff1f", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nI used the method recommended by README.md(v6.1) to reproduce the mAP, but I failed. \r\n'python train.py --data coco.yaml --cfg yolov5s.yaml --weights ' ' --hyp hyp.scratch-low.yaml --img 640 --batch-size 64 --epochs 300' .\r\nAll is default value,then I got the best mAP\uff08yolov5s\uff09 is 37.057%(the best mAP verified at the end of each epoch, 5000 images), it still has a gap of 0.4% mAP(37.4%). \r\nSimilarly, I reproduced the mAP\uff08yolov5n\uff09\uff0c27.586%----28.0%\uff0cNever get published results.\r\nMy GPU is GTX NVIDIA RTX A4000\uff0816116MiB\uff09, and I think it may be enough.\r\n\r\nIs this a normal error caused by equipment\uff08GPU) differences, or are there other reasons\uff1f\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31', 'files': [{'path': 'data/scripts/get_coco.sh', 'Loc': {'(None, None, 13)': {'mod': [13]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["data/scripts/get_coco.sh"]}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "079b36d72ba2ef298f7ae4dc283d8c7975eb02f6", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6540", "iss_label": "question", "title": "Is YOLOv5 able to detect a specific number of classes according to the project's need, like just 2 or 3 classes?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nHi, I'm using YOLOv5 in my project and I have a question. If I use \"--classes \" it could detect one type of class, but is there anyway that I can detect more than one type, like 2 or 3 different types? 
I've already tried \"-- classes 0 1\" or \"-- classes [0] [1]\" but without success. Thanks for the help!\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '079b36d72ba2ef298f7ae4dc283d8c7975eb02f6', 'files': [{'path': 'detect.py', 'Loc': {\"(None, 'parse_opt', 216)\": {'mod': [231]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["detect.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "e96c74b5a1c4a27934c5d8ad52cde778af248ed8", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4357", "iss_label": "question\nStale", "title": "Average Precision for each class", "body": "## Is there any way to see the average precision for each class?\r\n\r\nI have run my model for 1,000 epochs and I have a bunch of metrics (which are AMAZING by the way, thanks so making it so easy to see them!) and I have mAP, but I was wondering if there was a way to see the AP for each class? Like a table or something. \r\n\r\nIn addition, is it possible to see the precision-recall graphs for each class? I can see something in the images tab on wandb, but as I have 80 classes, it looks very messy. ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e96c74b5a1c4a27934c5d8ad52cde778af248ed8', 'files': [{'path': 'val.py', 'Loc': {\"(None, 'parse_opt', 293)\": {'mod': [305]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["val.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "96e36a7c913e2433446ff410a4cf60041010a524", "is_iss": 0, "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4152", "iss_label": "question", "title": "Format of data for testing trained model", "body": "In what format do I need to feed the validation dataset to the val.py file? Should images and markup be in the same folder or in different ones? 
In what format should the coordinates of the bounding boxes be in - yolo or something else?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '96e36a7c913e2433446ff410a4cf60041010a524', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "eaf5ec4467795344e7d9601515b017fd8c46e44b", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/439", "iss_label": "", "title": " decoding error in preprocessing synthesizer", "body": "I get the following error while running `synthesizer_preprocess_audio.py`.\r\n\r\n```\r\nArguments:\r\n datasets_root: /home/amin/voice_cloning/libri_100\r\n out_dir: /home/amin/voice_cloning/libri_100/SV2TTS/synthesizer\r\n n_processes: None\r\n skip_existing: True\r\n hparams: \r\n\r\nUsing data from:\r\n /home/amin/voice_cloning/libri_100/LibriSpeech/train-clean-100\r\nLibriSpeech: 0%| | 0/502 [00:00\r\n alignments = [line.rstrip().split(\" \") for line in alignments_file]\r\n File \"/usr/lib/python3.6/codecs.py\", line 321, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"synthesizer_preprocess_audio.py\", line 52, in \r\n preprocess_librispeech(**vars(args)) \r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 36, in preprocess_librispeech\r\n for speaker_metadata in tqdm(job, \"LibriSpeech\", len(speaker_dirs), unit=\"speakers\"):\r\n File \"/home/amin/.local/lib/python3.6/site-packages/tqdm/std.py\", line 1130, in __iter__\r\n for obj in iterable:\r\n File \"/usr/lib/python3.6/multiprocessing/pool.py\", line 735, in next\r\n raise value\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte\r\n```\r\n\r\nCan anyone help? 
It can save a lot of time for me.\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'eaf5ec4467795344e7d9601515b017fd8c46e44b', 'files': [{'path': 'synthesizer/preprocess.py', 'Loc': {\"(None, 'preprocess_speaker', 54)\": {'mod': [60]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/629", "iss_label": "", "title": "Error in macOS when trying to launch the toolbox", "body": "Traceback (most recent call last):\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/demo_toolbox.py\", line 2, in \r\n from toolbox import Toolbox\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/__init__.py\", line 1, in \r\n from toolbox.ui import UI\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/ui.py\", line 6, in \r\n from encoder.inference import plot_embedding_as_heatmap\r\nModuleNotFoundError: No module named 'encoder.inference'", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'encoder/inference.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder/inference.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1156", "iss_label": "", "title": "missing SV2TTS/", "body": "Hey, I'm trying to finetune the pretrained model but it looks like I am missing the SV2TTS/ directory which contains train.txt, etc.\r\nI have a saved_models/ directory which has three *.pt files for the three components of this TTS architecture.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'synthesizer_preprocess_audio.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer_preprocess_audio.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "e32cf8f4ddb63d9a7603eeb31f1855b54926aee6", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/549", "iss_label": "", "title": "Import Error", "body": "Hey, i am trying to run this code and everytime i run demo_toolbox.py there comes an error \"failed to load qt binding\" i tried reinstalling matplotlib and also tried installing PYQt5 . 
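Editor's aside on the `ModuleNotFoundError: No module named 'encoder.inference'` record above: a minimal sketch, assuming the error comes from launching the toolbox from outside the clone, so the top-level `encoder` package is not on the import path.

```python
import os
import sys

# If the toolbox is launched from another directory, Python cannot resolve the
# repo's top-level 'encoder' package; prepending the repo root restores it.
repo_root = os.path.dirname(os.path.abspath(__file__))  # assumes this file sits at the repo root
sys.path.insert(0, repo_root)
from encoder.inference import plot_embedding_as_heatmap  # noqa: E402
```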
\r\n\r\nNeed Help !!!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e32cf8f4ddb63d9a7603eeb31f1855b54926aee6', 'files': [{'path': 'toolbox/ui.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/ui.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/117", "iss_label": "", "title": "ModuleNotFoundError: No module named 'tensorflow.contrib.seq2seq'", "body": "When running demo_cli.py\r\n\r\nPython = 3.7.4\r\nTensorFlow = 2.0 RC\r\nCUDA = 10.1\r\ncuDNN = Installed for right CUDA version\r\nWindows = 10", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/275", "iss_label": "", "title": "Speaker verification implementation", "body": "I need just the speaker verification part, which is the implementation of the [GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION](https://arxiv.org/pdf/1710.10467.pdf) paper; how can I proceed to get it, please?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5\nasking where a feature is implemented", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["encoder/"]}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/855", "iss_label": "", "title": "Output audio spectrum - low frequences", "body": "Hi, I am trying to train a new model in Polish, but after 476k steps the output sound is very \"robotic\". I was trying to find out why this happens and noticed (based on my output and @blue-fish samples: https://blue-fish.github.io/experiments/RTVC-FT-1.html) that the spectrum of this model doesn't include high frequencies compared to Google's. Both are in logarithmic scale. 
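Editor's aside on the `tensorflow.contrib.seq2seq` record above: a small guard sketch. `tf.contrib` was removed in TensorFlow 2.0, so the reported 2.0 RC cannot satisfy this import; a 1.x pin in requirements.txt is the usual resolution (hedged: version numbers are illustrative).

```python
import tensorflow as tf

# tf.contrib was removed in TensorFlow 2.0, so this era of the repo needs 1.x.
major = int(tf.__version__.split(".")[0])
if major >= 2:
    raise RuntimeError(
        f"tensorflow {tf.__version__} has no tf.contrib; install a 1.x release "
        "(e.g. pin tensorflow<2 in requirements.txt)"
    )
```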
\r\n\r\nOur output: \r\n\"Zrzut\r\n \r\nGoogle: (take a note its logarithmic scale)\r\n\"Zrzut\r\n\r\nDo you have any idea how to improve this?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [77]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1122", "iss_label": "", "title": "Requirements.txt failed to install with obscure issue with installing audioread", "body": "I ran into a few issues along the way that I was able to solve, namely errors like this:\r\n\r\n WARNING: Failed to write executable - trying to use .deleteme logic\r\n ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: \r\n 'C:\\\\Python310\\\\Scripts\\\\f2py.exe' -> 'C:\\\\Python310\\\\Scripts\\\\f2py.exe.deleteme'\r\n\r\nI fixed these by adding `--user` to the pip command.\r\n\r\nI also had to change requirements.txt to a newer version of numpy (1.22.1) to prevent it from failing to install due to older versions not being compatible with the version of Python I already have installed (3.10.6)\r\n\r\nBut now I'm stuck on this one:\r\n\r\n Requirement already satisfied: jsonpointer>=1.9 in c:\\users\\michael\\appdata\\roaming\\python\\python310\\site-packages (from jsonpatch->visdom==0.1.8.9->-r R:\\requirements.txt (line 15)) (2.3)\r\n Using legacy 'setup.py install' for umap-learn, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for visdom, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for audioread, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for pynndescent, since package 'wheel' is not installed.\r\n Installing collected packages: audioread, visdom, SoundFile, sounddevice, scikit-learn, resampy, pooch, matplotlib, pynndescent, librosa, umap-learn\r\n Running setup.py install for audioread ... error\r\n error: subprocess-exited-with-error\r\n \r\n \u00d7 Running setup.py install for audioread did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [40 lines of output]\r\n C:\\Users\\michael\\AppData\\Local\\Temp\\pip-install-nat_itg2\\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\\setup.py:17: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses\r\n import imp\r\n running install\r\n C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. 
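Editor's aside on the missing-high-frequencies question above: a sketch of the two hparams involved, using the stock values that also appear verbatim in the hparams dump later in this file. With a 16 kHz sample rate nothing above the 8 kHz Nyquist limit can exist in the spectrogram, and `fmax` caps the mel filterbank even lower, which is consistent with the plots stopping where Google's higher-rate samples keep going. Raising either value implies retraining; this is a diagnostic sketch, not a verified fix.

```python
sample_rate = 16000  # stock SV2TTS value (see the hparams dump further down)
fmax = 7600          # mel filterbank ceiling; must stay <= sample_rate / 2
assert fmax <= sample_rate / 2, "fmax cannot exceed the Nyquist frequency"
print(f"spectrogram content tops out near {fmax} Hz")
```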
Use build and pip and other standards-based tools.\r\n warnings.warn(\r\n Traceback (most recent call last):\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\util.py\", line 258, in subst_vars\r\n return _subst_compat(s).format_map(lookup)\r\n KeyError: 'py_version_nodot_plat'\r\n \r\n During handling of the above exception, another exception occurred:\r\n \r\n Traceback (most recent call last):\r\n File \"\", line 2, in \r\n File \"\", line 34, in \r\n File \"C:\\Users\\michael\\AppData\\Local\\Temp\\pip-install-nat_itg2\\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\\setup.py\", line 27, in \r\n setup(name='audioread',\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py\", line 148, in setup\r\n return run_commands(dist)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py\", line 163, in run_commands\r\n dist.run_commands()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py\", line 967, in run_commands\r\n self.run_command(cmd)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py\", line 985, in run_command\r\n cmd_obj.ensure_finalized()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\cmd.py\", line 107, in ensure_finalized\r\n self.finalize_options()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\install.py\", line 45, in finalize_options\r\n orig.install.finalize_options(self)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 381, in finalize_options\r\n self.expand_dirs()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 563, in expand_dirs\r\n self._expand_attrs(['install_purelib', 'install_platlib',\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 553, in _expand_attrs\r\n val = subst_vars(val, self.config_vars)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\util.py\", line 260, in subst_vars\r\n raise ValueError(f\"invalid variable {var}\")\r\n ValueError: invalid variable 'py_version_nodot_plat'\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n error: legacy-install-failure\r\n \r\n \u00d7 Encountered error while trying to install package.\r\n \u2570\u2500> audioread\r\n \r\n note: This is an issue with the package mentioned above, not pip.\r\n hint: See above for output from the failure.\r\n\r\nI'm not sure if the issue is due to \"setup.py install\" being deprecated; if that's the case I have no idea what the fix is because I think this is being required somewhere else - maybe another package needs a newer version? 
But I have no idea which one.\r\n\r\nI also thought maybe it could be that wheel wasn't installed, `since package 'wheel' is not installed.` but when I try to install it, it says it's already installed:\r\n\r\n    C:\\> pip install wheel --user\r\n\r\n    Requirement already satisfied: wheel in c:\\python310\\lib\\site-packages (0.37.1)\r\n\r\nThere's also the invalid variable error, but I have no idea what this is talking about.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\ndependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "95adc699c1deb637f485e85a5107d40da0ad94fc", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/717", "iss_label": "", "title": "I can't use Dataset/Speaker/Utterance", "body": "I can't use the upper section in the software. When loading, it shows:\r\nWarning: you did not pass a root directory for datasets as argument.\r\nHow can I fix this?\r\nThank you\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '95adc699c1deb637f485e85a5107d40da0ad94fc', 'files': [{'path': 'demo_toolbox.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nwarning", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["demo_toolbox.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "039f7e5402e6d9da7fad5022dae038cdfb507b39", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/13", "iss_label": "", "title": "problem with utils.argutils in python 3.6", "body": "Hi, under Win 10 64-bit with Python 3.6 it failed to import print_args because it can't find argutils.\r\nI think I have a relative import error but can't solve it.\r\n\r\nBtw, nice job on what I heard in the YouTube demo.\r\nIf I manually try to import utils from the root dir, it seems to load another utils file. \r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '039f7e5402e6d9da7fad5022dae038cdfb507b39', 'files': [{'path': 'synthesizer/__init__.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/884", "iss_label": "", "title": "Using a different speaker encoder", "body": "Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder. 
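Editor's aside on the audioread legacy-install failure above: a hedged sketch of one remedy. The `KeyError: 'py_version_nodot_plat'` comes from an older setuptools, and the "wheel is not installed" messages arise because wheel sits in the system site-packages while the install runs with `--user`; upgrading the build trio in the same scheme usually lets audioread build as a wheel. A suggestion, not a verified fix for this exact machine.

```python
import subprocess
import sys

# Upgrade pip/setuptools/wheel inside the same --user scheme the failing
# install used; newer setuptools also fixes the 'py_version_nodot_plat' bug.
subprocess.check_call(
    [sys.executable, "-m", "pip", "install", "--user", "--upgrade",
     "pip", "setuptools", "wheel"]
)
```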
If someone used a different encoder, could you explain the difficulties of replacing the encoder and how the results were different from the speaker encoder already in use?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {\"('Toolbox', 'add_real_utterance', 182)\": {'mod': [191]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "a32962bb7b4827660646ac6dabf62309aea08a91", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/488", "iss_label": "", "title": "preprocessing VoxCele2 is not working", "body": "While running encoder_preprocess on voxceleb2 dataset, I'm getting the following warning and nothing else happens. Not sure why?\r\n\r\n\r\n```\r\nraw: Preprocessing data for 5994 speakers.\r\nraw: 0%| | 0/5994 [00:00= high```\r\n\r\nIt occurs in line 61 of vocoder_dataset.py, \r\n```mel_offsets = [np.random.randint(0, offset) for offset in max_offsets]```\r\nSo I assume there is something wrong with the value of offset? e.g. offset=0 so np.random.randint could not generate a number [0, 0)?\r\nDid anyone encountered this problem too?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0713f860a3dd41afb56e83cff84dbdf589d5e11a', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [88]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/651", "iss_label": "", "title": "Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc", "body": "hello. \r\nPlease help me, I do not know how to solve my problem problem. 
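Editor's aside on the `low >= high` ValueError quoted above: a minimal guard sketch confirming the reporter's suspicion. `np.random.randint(0, offset)` requires `offset > 0`, so a clip that is exactly one training window long (offset 0) must be special-cased. The sample values below are hypothetical.

```python
import numpy as np

max_offsets = [5, 0, 12]  # hypothetical; the 0 reproduces the crash
# np.random.randint(low, high) needs low < high, so offset == 0 is special-cased:
mel_offsets = [np.random.randint(0, offset) if offset > 0 else 0
               for offset in max_offsets]
print(mel_offsets)
```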
\r\nI run and completed without errors \r\n`python synthesizer_preprocess_audio.py `\r\n`python synthesizer_preprocess_embeds.py /SV2TTS/synthesizer`\r\nbut after typing `python synthesizer_train.py my_run /SV2TTS/synthesizer` \r\nshows me a long error\r\n\r\n\r\n```\r\nArguments:\r\n name: my_run\r\n synthesizer_root: C:\\Users\\matve\\Documents\\Tacotron\\datasets\\SV2TTS\\synthesizer\r\n models_dir: synthesizer/saved_models/\r\n mode: synthesis\r\n GTA: True\r\n restore: True\r\n summary_interval: 2500\r\n embedding_interval: 10000\r\n checkpoint_interval: 2000\r\n eval_interval: 100000\r\n tacotron_train_steps: 2000000\r\n tf_log_level: 1\r\n slack_url: None\r\n hparams: \r\n\r\nCheckpoint path: synthesizer/saved_models/logs-my_run\\taco_pretrained\\tacotron_model.ckpt\r\nLoading training data from: C:\\Users\\matve\\Documents\\Tacotron\\datasets\\SV2TTS\\synthesizer\\train.txt\r\nUsing model: Tacotron\r\nHyperparameters:\r\n allow_clipping_in_normalization: True\r\n attention_dim: 128\r\n attention_filters: 32\r\n attention_kernel: (31,)\r\n cbhg_conv_channels: 128\r\n cbhg_highway_units: 128\r\n cbhg_highwaynet_layers: 4\r\n cbhg_kernels: 8\r\n cbhg_pool_size: 2\r\n cbhg_projection: 256\r\n cbhg_projection_kernel_size: 3\r\n cbhg_rnn_units: 128\r\n cleaners: english_cleaners\r\n clip_for_wavenet: True\r\n clip_mels_length: True\r\n cross_entropy_pos_weight: 20\r\n cumulative_weights: True\r\n decoder_layers: 2\r\n decoder_lstm_units: 1024\r\n embedding_dim: 512\r\n enc_conv_channels: 512\r\n enc_conv_kernel_size: (5,)\r\n enc_conv_num_layers: 3\r\n encoder_lstm_units: 256\r\n fmax: 7600\r\n fmin: 55\r\n frame_shift_ms: None\r\n griffin_lim_iters: 60\r\n hop_size: 200\r\n mask_decoder: False\r\n mask_encoder: True\r\n max_abs_value: 4.0\r\n max_iters: 2000\r\n max_mel_frames: 900\r\n min_level_db: -100\r\n n_fft: 800\r\n natural_eval: False\r\n normalize_for_wavenet: True\r\n num_mels: 80\r\n outputs_per_step: 2\r\n postnet_channels: 512\r\n postnet_kernel_size: (5,)\r\n postnet_num_layers: 5\r\n power: 1.5\r\n predict_linear: False\r\n preemphasis: 0.97\r\n preemphasize: True\r\n prenet_layers: [256, 256]\r\n ref_level_db: 20\r\n rescale: True\r\n rescaling_max: 0.9\r\n sample_rate: 16000\r\n signal_normalization: True\r\n silence_min_duration_split: 0.4\r\n silence_threshold: 2\r\n smoothing: False\r\n speaker_embedding_size: 256\r\n split_on_cpu: True\r\n stop_at_any: True\r\n symmetric_mels: True\r\n tacotron_adam_beta1: 0.9\r\n tacotron_adam_beta2: 0.999\r\n tacotron_adam_epsilon: 1e-06\r\n tacotron_batch_size: 36\r\n tacotron_clip_gradients: True\r\n tacotron_data_random_state: 1234\r\n tacotron_decay_learning_rate: True\r\n tacotron_decay_rate: 0.5\r\n tacotron_decay_steps: 50000\r\n tacotron_dropout_rate: 0.5\r\n tacotron_final_learning_rate: 1e-05\r\n tacotron_gpu_start_idx: 0\r\n tacotron_initial_learning_rate: 0.001\r\n tacotron_num_gpus: 1\r\n tacotron_random_seed: 5339\r\n tacotron_reg_weight: 1e-07\r\n tacotron_scale_regularization: False\r\n tacotron_start_decay: 50000\r\n tacotron_swap_with_cpu: False\r\n tacotron_synthesis_batch_size: 128\r\n tacotron_teacher_forcing_decay_alpha: 0.0\r\n tacotron_teacher_forcing_decay_steps: 280000\r\n tacotron_teacher_forcing_final_ratio: 0.0\r\n tacotron_teacher_forcing_init_ratio: 1.0\r\n tacotron_teacher_forcing_mode: constant\r\n tacotron_teacher_forcing_ratio: 1.0\r\n tacotron_teacher_forcing_start_decay: 10000\r\n tacotron_test_batches: None\r\n tacotron_test_size: 0.05\r\n tacotron_zoneout_rate: 0.1\r\n train_with_GTA: 
False\r\n trim_fft_size: 512\r\n trim_hop_size: 128\r\n trim_top_db: 23\r\n use_lws: False\r\n utterance_min_duration: 1.6\r\n win_size: 800\r\nLoaded metadata for 290550 examples (366.70 hours)\r\ninitialisation done /gpu:0\r\nInitialized Tacotron model. Dimensions (? = dynamic shape): \r\n Train mode: True\r\n Eval mode: False\r\n GTA mode: False\r\n Synthesis mode: False\r\n Input: (?, ?)\r\n device: 0\r\n embedding: (?, ?, 512)\r\n enc conv out: (?, ?, 512)\r\n encoder out (cond): (?, ?, 768)\r\n decoder out: (?, ?, 80)\r\n residual out: (?, ?, 512)\r\n projected residual out: (?, ?, 80)\r\n mel out: (?, ?, 80)\r\n out: (?, ?)\r\n Tacotron Parameters 28.439 Million.\r\ninitialisation done /gpu:0\r\nInitialized Tacotron model. Dimensions (? = dynamic shape): \r\n Train mode: False\r\n Eval mode: True\r\n GTA mode: False\r\n Synthesis mode: False\r\n Input: (?, ?)\r\n device: 0\r\n embedding: (?, ?, 512)\r\n enc conv out: (?, ?, 512)\r\n encoder out (cond): (?, ?, 768)\r\n decoder out: (?, ?, 80)\r\n residual out: (?, ?, 512)\r\n projected residual out: (?, ?, 80)\r\n mel out: (?, ?, 80)\r\n out: (?, ?)\r\n Tacotron Parameters 28.439 Million.\r\nTacotron training set to a maximum of 2000000 steps\r\nLoading checkpoint synthesizer/saved_models/logs-my_run\\taco_pretrained\\tacotron_model.ckpt-0\r\n\r\nGenerated 64 train batches of size 36 in 3.626 sec\r\nStep 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]\r\nStep 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]\r\n\r\nSaving Model Character Embeddings visualization..\r\nTacotron Character embeddings have been updated on tensorboard!\r\nStep 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]\r\nStep 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]\r\n\r\nGenerated 403 test batches of size 36 in 15.574 sec\r\nExiting due to exception: 2 root error(s) found.\r\n (0) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\r\n\t [[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py:1748) ]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n\t [[Tacotron_model/clip_by_global_norm/mul_30/_479]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n (1) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\r\n\t [[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py:1748) ]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n0 successful operations.\r\n0 derived errors ignored.\r\n\r\nOriginal stack trace for 'Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d':\r\n File \"synthesizer_train.py\", line 55, in \r\n tacotron_train(args, log_dir, hparams)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 392, in 
tacotron_train\r\n return train(log_dir, args, hparams)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 148, in train\r\n model, stats = model_train_mode(args, feeder, hparams, global_step)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 91, in model_train_mode\r\n is_training=True, split_infos=feeder.split_infos)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\tacotron.py\", line 230, in initialize\r\n residual = postnet(decoder_output)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\modules.py\", line 406, in __call__\r\n \"conv_layer_{}_\".format(i + 1) + self.scope)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\modules.py\", line 420, in conv1d\r\n padding=\"same\")\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 324, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\layers\\convolutional.py\", line 218, in conv1d\r\n return layer.apply(inputs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 324, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\base_layer.py\", line 1700, in apply\r\n return self.__call__(inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\layers\\base.py\", line 548, in __call__\r\n outputs = super(Layer, self).__call__(inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\base_layer.py\", line 854, in __call__\r\n outputs = call_fn(cast_inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 234, in wrapper\r\n return converted_call(f, options, args, kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 439, in converted_call\r\n return _call_unconverted(f, args, kwargs, options)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 330, in _call_unconverted\r\n return f(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\layers\\convolutional.py\", line 387, in call\r\n return super(Conv1D, self).call(inputs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\layers\\convolutional.py\", line 197, in call\r\n outputs = self._convolution_op(inputs, self.kernel)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 1134, in __call__\r\n return self.conv_op(inp, filter)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 639, in __call__\r\n return self.call(inp, filter)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 238, in __call__\r\n name=self.name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 227, in _conv1d\r\n name=name)\r\n File 
\"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 574, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 574, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 1681, in conv1d\r\n name=name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\gen_nn_ops.py\", line 1071, in conv2d\r\n data_format=data_format, dilations=dilations, name=name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\op_def_library.py\", line 794, in _apply_op_helper\r\n op_def=op_def)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 507, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 3357, in create_op\r\n attrs, op_def, compute_device)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 3426, in _create_op_internal\r\n op_def=op_def)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1748, in __init__\r\n self._traceback = tf_stack.extract_stack()\r\n\r\n2021-02-05 20:02:33.232435: W tensorflow/core/kernels/queue_base.cc:277] _1_datafeeder/eval_queue: Skipping cancelled enqueue attempt with queue not closed\r\n2021-02-05 20:02:33.232577: W tensorflow/core/kernels/queue_base.cc:277] _0_datafeeder/input_queue: Skipping cancelled enqueue attempt with queue not closed\r\n\r\n```\r\nI think it can't use the memory of my GTX 1660 super .Tell the noob what to do\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [243]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "77c0bd169d8158ed1cdb180cda73c24d3cacd778", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1263", "iss_label": "", "title": "Python 3.10.12 is not supported ", "body": "When I ran python3.10 -m pip install numpy==1.20.3 on linux mint, I got an error while I was trying to install it. 
But it was totally fine when I used python3.8\r\n![12](https://github.com/CorentinJ/Real-Time-Voice-Cloning/assets/100217654/99071c68-bf38-4ffe-b789-9d292ed539a5)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '77c0bd169d8158ed1cdb180cda73c24d3cacd778', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, None)': {'mod': [4]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\ndependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/250", "iss_label": "", "title": "[Errno 2] No such file or directory: 'encoder/_sources.txt'", "body": "I have this problem, but I can't understand what this file contains. There is no _sources.txt in this repo.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder_preprocess.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder_preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5e400d474043044ba0e3e907a74b4baccb16ee7c", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/425", "iss_label": "", "title": "Tensorflow.contrib file missing what to do", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5e400d474043044ba0e3e907a74b4baccb16ee7c', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 35)': {'mod': [35]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nand\n2\nhere this means the guidance is in the doc\nthe cause of the problem is the version of the dependency library", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/419", "iss_label": "", "title": "Getting an exception when browsing for files", "body": "For some reason, importing mp3 files is not working. 
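Editor's aside on the numpy-on-Python-3.10 record above: a small sketch of the version logic. numpy 1.20.3 predates CPython 3.10, so pip finds no wheel there and attempts a source build that fails, while the same pin installs cleanly on 3.8. The `>=1.22` bound is an assumption that the surrounding code tolerates a newer numpy, as the 1.22.1 workaround earlier in this file suggests.

```python
import sys

# Candidate pin for requirements.txt line 4, depending on the interpreter:
required = "numpy>=1.22" if sys.version_info >= (3, 10) else "numpy==1.20.3"
print(required)
```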
Anyone got an idea on why this might be the case.?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9553eaa1748cf94814be322ec7b096d2d6bc7f28', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 40)': {'mod': [40]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/221", "iss_label": "", "title": "A couple inquiries about the colab version", "body": "So I have a setup using a copy of the colaboratory version, but I want to be able to generate a few sentences at a time without having to generate per sentence.\r\n\r\nI understand that commas and periods don't work, but in the demonstration video it was mentioned that line breaks are a way to get around this for now... however that's done in the toolbox application. How would it be done in code?\r\n\r\nI've tried \\n but I assume that's only for print related arguments... but I'm fairly new to Python so excuse my ignorance.\r\n\r\nOn top of this, how could I improve the voice in colab? In regards to training, it's mentioned that a decent session requires around 500gb or more... since I don't exactly have that in colab, is there another way to go about doing this?\r\n\r\nI've tried the code with the input being longer than 10 seconds, but apparently if the input is more than 10 seconds or so the voice seems more jittery than it would be if it were capped at 10 seconds. I absolutely applaud this repo but I just really need to understand it a bit better... Thanks in advance.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {\"('Toolbox', 'synthesize', 158)\": {'mod': [170, 171, 172, 173, 174, 175]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/225", "iss_label": "", "title": "Not code-savy but want to experiment with code", "body": "I have Python Spyder downloaded, but I do not know much about coding, or how to get to the stage where I can add audio and synthesize it. 
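Editor's aside on the line-break question in the colab record above: a minimal sketch of the split the toolbox effectively performs. `\n` works fine inside a plain Python string (it is not print-specific); the trick is splitting the prompt into one entry per line before handing the list to the synthesizer.

```python
prompt = "First sentence here\nSecond sentence here\nThird one"
texts = [line for line in prompt.split("\n") if line.strip()]  # one entry per line
print(texts)  # ready to pass as the text list, one spectrogram per entry
```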
What would you recommend?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "070a3c187f87136ebe92aa72766f8343772d414e", "is_iss": 0, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/378", "iss_label": "", "title": "i cant install NVIDIA CUDA", "body": "I can't install NVIDIA CUDA even though I followed everything that [this guide](https://poorlydocumented.com/2019/11/installing-corentinjs-real-time-voice-cloning-project-on-windows-10-from-scratch/l) told me to do. I have also tried searching for this problem on the internet, but nothing solves my problem. I have also provided an image of the error [here](https://imgur.com/a/fYkiBYQ).\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '070a3c187f87136ebe92aa72766f8343772d414e', 'files': [{'path': 'demo_cli.py', 'Loc': {'(None, None, None)': {'mod': [34]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["demo_cli.py"], "doc": [], "test": [], "config": [], "asset": []}}
-{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "is_iss": 1, "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/420", "iss_label": "", "title": "New Audio Issue: Assertion Failed", "body": "This was working fine yesterday, and no big changes were made. \r\nHowever, today starting up the demo toolbox loaded:\r\nAssertion failed!\r\n\r\nProgram: C:\\Users\\paul1\\AppData\\Local\\Programs\\Python\\Python37\\python.exe\r\nFile: src/hostapi/wdmks/pa_win_wdmks.c, Line 1061\r\n\r\nExpression: FALSE\r\n\r\nI have tried reinstalling visual studio as well, but to no avail. Any thoughts on this would be deeply appreciated.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "sounddevice"}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "library"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["sounddevice"]}}
-{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "39827a3998afa3ea612e7cc9a475085765d4d509", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134", "iss_label": "asking-for-help-with-local-system-issues", "title": "[Bug]: Non checkpoints found. 
Can't run without a checkpoint.", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nDuring the installation (windows), an error occurs :\r\n```\r\nvenv \"G:\\Dev\\stable-diffusion-webui\\venv\\Scripts\\Python.exe\"\r\nPython 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]\r\nCommit hash: 9e78d2c419732711e984c4478af15ece121d64fd\r\nInstalling requirements for Web UI\r\nLaunching Web UI with arguments:\r\nNo module 'xformers'. Proceeding without it.\r\nNo checkpoints found. When searching for checkpoints, looked at:\r\n - file G:\\Dev\\stable-diffusion-webui\\model.ckpt\r\n - directory G:\\Dev\\stable-diffusion-webui\\models\\Stable-diffusion\r\nCan't run without a checkpoint. Find and place a .ckpt file into any of those locations. The program will exit.\r\n```\n\n### Steps to reproduce the problem\n\nLaunch webui-user.bat\n\n### What should have happened?\n\nInstallation complete\n\n### Commit where the problem happens\n\n9e78d2c419732711e984c4478af15ece121d64fd\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '39827a3998afa3ea612e7cc9a475085765d4d509', 'files': [{'path': 'modules/sd_models.py', 'Loc': {\"(None, 'load_model', 230)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": ["modules/sd_models.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11458", "iss_label": "bug-report", "title": "[Bug]: ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nLaunching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\r\n2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 /content/microsoftexcel/launch.py:38 in \u2502\r\n\u2502 \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2771 38 \u2502 main() \u2502\r\n\u2502 39 \u2502\r\n\u2502 \u2502\r\n\u2502 
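Editor's aside on the "Can't run without a checkpoint" record above: a small sketch mirroring the two search locations printed in the log; placing any `.ckpt` file in either spot clears the startup error.

```python
from pathlib import Path

# The two locations from the log above:
root = Path(r"G:\Dev\stable-diffusion-webui")
candidates = [root / "model.ckpt",
              *(root / "models" / "Stable-diffusion").glob("*.ckpt")]
found = [p for p in candidates if p.exists()]
print(found or "no checkpoints found; download one into models/Stable-diffusion")
```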
/content/microsoftexcel/launch.py:34 in main \u2502\r\n\u2502 \u2502\r\n\u2502 31 \u2502 if args.test_server: \u2502\r\n\u2502 32 \u2502 \u2502 configure_for_tests() \u2502\r\n\u2502 33 \u2502 \u2502\r\n\u2502 \u2771 34 \u2502 start() \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/launch_utils.py:340 in start \u2502\r\n\u2502 \u2502\r\n\u2502 337 \u2502\r\n\u2502 338 def start(): \u2502\r\n\u2502 339 \u2502 print(f\"Launching {'API server' if '--nowebui' in sys.argv else 'W \u2502\r\n\u2502 \u2771 340 \u2502 import webui \u2502\r\n\u2502 341 \u2502 if '--nowebui' in sys.argv: \u2502\r\n\u2502 342 \u2502 \u2502 webui.api_only() \u2502\r\n\u2502 343 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/webui.py:42 in \u2502\r\n\u2502 \u2502\r\n\u2502 39 startup_timer.record(\"import ldm\") \u2502\r\n\u2502 40 \u2502\r\n\u2502 41 from modules import extra_networks \u2502\r\n\u2502 \u2771 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, \u2502\r\n\u2502 43 \u2502\r\n\u2502 44 # Truncate version number of nightly/local build of PyTorch to not cau \u2502\r\n\u2502 45 if \".dev\" in torch.__version__ or \"+git\" in torch.__version__: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/call_queue.py:5 in \u2502\r\n\u2502 \u2502\r\n\u2502 2 import threading \u2502\r\n\u2502 3 import time \u2502\r\n\u2502 4 \u2502\r\n\u2502 \u2771 5 from modules import shared, progress, errors \u2502\r\n\u2502 6 \u2502\r\n\u2502 7 queue_lock = threading.Lock() \u2502\r\n\u2502 8 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/shared.py:18 in \u2502\r\n\u2502 \u2502\r\n\u2502 15 import modules.devices as devices \u2502\r\n\u2502 16 from modules import localization, script_loading, errors, ui_component \u2502\r\n\u2502 17 from modules.paths_internal import models_path, script_path, data_path \u2502\r\n\u2502 \u2771 18 from ldm.models.diffusion.ddpm import LatentDiffusion \u2502\r\n\u2502 19 from typing import Optional \u2502\r\n\u2502 20 \u2502\r\n\u2502 21 demo = None \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model \u2502\r\n\u2502 s/diffusion/ddpm.py:20 in \u2502\r\n\u2502 \u2502\r\n\u2502 17 import itertools \u2502\r\n\u2502 18 from tqdm import tqdm \u2502\r\n\u2502 19 from torchvision.utils import make_grid \u2502\r\n\u2502 \u2771 20 from pytorch_lightning.utilities.distributed import rank_zero_only \u2502\r\n\u2502 21 from omegaconf import ListConfig \u2502\r\n\u2502 22 \u2502\r\n\u2502 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'\n\n### Steps to reproduce the problem\n\n1. on colab\r\n2. try to use the new 1.4.0 release\r\n3. 
error\n\n### What should have happened?\n\nno error\n\n### Version or Commit where the problem happens\n\n1.4.0\n\n### What Python version are you running on ?\n\nNone\n\n### What platforms do you use to access the UI ?\n\nOther/Cloud\n\n### What device are you running WebUI on?\n\n_No response_\n\n### Cross attention optimization\n\nAutomatic\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n```Shell\n!COMMANDLINE_ARGS=\"--share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\" REQS_FILE=\"requirements.txt\" python launch.py\n```\n\n\n### List of extensions\n\nsd-webui-tunnels\r\ncontrolnet\r\nopenpose-editor\r\nposex\r\na1111-sd-webui-tagcomplete\r\nsupermerger\r\nultimate-upscale-for-automatic1111\r\na111 locon extension\r\nimages browser\r\n\n\n### Console logs\n\n```Shell\n**truncated on colab**\r\n\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 1217 100 1217 0 0 3699 0 --:--:-- --:--:-- --:--:-- 3699\r\n100 1722k 100 1722k 0 0 670k 0 0:00:02 0:00:02 --:--:-- 1355k\r\nArchive: /content/microsoftexcel.zip\r\n creating: microsoftexcel/\r\n inflating: microsoftexcel/.eslintignore \r\n inflating: microsoftexcel/.eslintrc.js \r\n inflating: microsoftexcel/.git-blame-ignore-revs \r\n creating: microsoftexcel/.github/\r\n creating: microsoftexcel/.github/ISSUE_TEMPLATE/\r\n inflating: microsoftexcel/.github/ISSUE_TEMPLATE/bug_report.yml \r\n inflating: microsoftexcel/.github/ISSUE_TEMPLATE/config.yml \r\n inflating: microsoftexcel/.github/ISSUE_TEMPLATE/feature_request.yml \r\n inflating: microsoftexcel/.github/pull_request_template.md \r\n creating: microsoftexcel/.github/workflows/\r\n inflating: microsoftexcel/.github/workflows/on_pull_request.yaml \r\n inflating: microsoftexcel/.github/workflows/run_tests.yaml \r\n inflating: microsoftexcel/.gitignore \r\n inflating: microsoftexcel/.pylintrc \r\n inflating: microsoftexcel/CHANGELOG.md \r\n inflating: microsoftexcel/CODEOWNERS \r\n creating: microsoftexcel/configs/\r\n inflating: microsoftexcel/configs/alt-diffusion-inference.yaml \r\n inflating: microsoftexcel/configs/instruct-pix2pix.yaml \r\n inflating: microsoftexcel/configs/v1-inference.yaml \r\n inflating: microsoftexcel/configs/v1-inpainting-inference.yaml \r\n creating: microsoftexcel/embeddings/\r\n extracting: microsoftexcel/embeddings/Place Textual Inversion embeddings here.txt \r\n inflating: microsoftexcel/environment-wsl2.yaml \r\n creating: microsoftexcel/extensions/\r\n extracting: microsoftexcel/extensions/put extensions here.txt \r\n creating: microsoftexcel/extensions-builtin/\r\n creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/\r\n creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/javascript/\r\n inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js \r\n creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/scripts/\r\n inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/scripts/hotkey_config.py \r\n inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/style.css \r\n creating: microsoftexcel/extensions-builtin/extra-options-section/\r\n creating: microsoftexcel/extensions-builtin/extra-options-section/scripts/\r\n inflating: microsoftexcel/extensions-builtin/extra-options-section/scripts/extra_options_section.py \r\n creating: microsoftexcel/extensions-builtin/LDSR/\r\n inflating: 
microsoftexcel/extensions-builtin/LDSR/ldsr_model_arch.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/preload.py \r\n creating: microsoftexcel/extensions-builtin/LDSR/scripts/\r\n inflating: microsoftexcel/extensions-builtin/LDSR/scripts/ldsr_model.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/sd_hijack_autoencoder.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/vqvae_quantize.py \r\n creating: microsoftexcel/extensions-builtin/Lora/\r\n inflating: microsoftexcel/extensions-builtin/Lora/extra_networks_lora.py \r\n inflating: microsoftexcel/extensions-builtin/Lora/lora.py \r\n inflating: microsoftexcel/extensions-builtin/Lora/preload.py \r\n creating: microsoftexcel/extensions-builtin/Lora/scripts/\r\n inflating: microsoftexcel/extensions-builtin/Lora/scripts/lora_script.py \r\n inflating: microsoftexcel/extensions-builtin/Lora/ui_extra_networks_lora.py \r\n creating: microsoftexcel/extensions-builtin/prompt-bracket-checker/\r\n creating: microsoftexcel/extensions-builtin/prompt-bracket-checker/javascript/\r\n inflating: microsoftexcel/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js \r\n creating: microsoftexcel/extensions-builtin/ScuNET/\r\n inflating: microsoftexcel/extensions-builtin/ScuNET/preload.py \r\n creating: microsoftexcel/extensions-builtin/ScuNET/scripts/\r\n inflating: microsoftexcel/extensions-builtin/ScuNET/scripts/scunet_model.py \r\n inflating: microsoftexcel/extensions-builtin/ScuNET/scunet_model_arch.py \r\n creating: microsoftexcel/extensions-builtin/SwinIR/\r\n inflating: microsoftexcel/extensions-builtin/SwinIR/preload.py \r\n creating: microsoftexcel/extensions-builtin/SwinIR/scripts/\r\n inflating: microsoftexcel/extensions-builtin/SwinIR/scripts/swinir_model.py \r\n inflating: microsoftexcel/extensions-builtin/SwinIR/swinir_model_arch.py \r\n inflating: microsoftexcel/extensions-builtin/SwinIR/swinir_model_arch_v2.py \r\n creating: microsoftexcel/html/\r\n inflating: microsoftexcel/html/card-no-preview.png \r\n inflating: microsoftexcel/html/extra-networks-card.html \r\n inflating: microsoftexcel/html/extra-networks-no-cards.html \r\n inflating: microsoftexcel/html/footer.html \r\n inflating: microsoftexcel/html/image-update.svg \r\n inflating: microsoftexcel/html/licenses.html \r\n creating: microsoftexcel/javascript/\r\n inflating: microsoftexcel/javascript/aspectRatioOverlay.js \r\n inflating: microsoftexcel/javascript/contextMenus.js \r\n inflating: microsoftexcel/javascript/dragdrop.js \r\n inflating: microsoftexcel/javascript/edit-attention.js \r\n inflating: microsoftexcel/javascript/extensions.js \r\n inflating: microsoftexcel/javascript/extraNetworks.js \r\n inflating: microsoftexcel/javascript/generationParams.js \r\n inflating: microsoftexcel/javascript/hints.js \r\n inflating: microsoftexcel/javascript/hires_fix.js \r\n inflating: microsoftexcel/javascript/imageMaskFix.js \r\n inflating: microsoftexcel/javascript/imageviewer.js \r\n inflating: microsoftexcel/javascript/imageviewerGamepad.js \r\n inflating: microsoftexcel/javascript/localization.js \r\n inflating: microsoftexcel/javascript/notification.js \r\n inflating: microsoftexcel/javascript/profilerVisualization.js \r\n inflating: microsoftexcel/javascript/progressbar.js \r\n inflating: microsoftexcel/javascript/textualInversion.js \r\n inflating: microsoftexcel/javascript/token-counters.js \r\n inflating: microsoftexcel/javascript/ui.js \r\n inflating: 
microsoftexcel/javascript/ui_settings_hints.js \r\n inflating: microsoftexcel/launch.py \r\n inflating: microsoftexcel/LICENSE.txt \r\n creating: microsoftexcel/localizations/\r\n extracting: microsoftexcel/localizations/Put localization files here.txt \r\n creating: microsoftexcel/models/\r\n creating: microsoftexcel/models/deepbooru/\r\n extracting: microsoftexcel/models/deepbooru/Put your deepbooru release project folder here.txt \r\n creating: microsoftexcel/models/karlo/\r\n inflating: microsoftexcel/models/karlo/ViT-L-14_stats.th \r\n creating: microsoftexcel/models/Stable-diffusion/\r\n extracting: microsoftexcel/models/Stable-diffusion/Put Stable Diffusion checkpoints here.txt \r\n creating: microsoftexcel/models/VAE/\r\n extracting: microsoftexcel/models/VAE/Put VAE here.txt \r\n creating: microsoftexcel/models/VAE-approx/\r\n inflating: microsoftexcel/models/VAE-approx/model.pt \r\n creating: microsoftexcel/modules/\r\n creating: microsoftexcel/modules/api/\r\n inflating: microsoftexcel/modules/api/api.py \r\n inflating: microsoftexcel/modules/api/models.py \r\n inflating: microsoftexcel/modules/call_queue.py \r\n inflating: microsoftexcel/modules/cmd_args.py \r\n creating: microsoftexcel/modules/codeformer/\r\n inflating: microsoftexcel/modules/codeformer/codeformer_arch.py \r\n inflating: microsoftexcel/modules/codeformer/vqgan_arch.py \r\n inflating: microsoftexcel/modules/codeformer_model.py \r\n inflating: microsoftexcel/modules/config_states.py \r\n inflating: microsoftexcel/modules/deepbooru.py \r\n inflating: microsoftexcel/modules/deepbooru_model.py \r\n inflating: microsoftexcel/modules/devices.py \r\n inflating: microsoftexcel/modules/errors.py \r\n inflating: microsoftexcel/modules/esrgan_model.py \r\n inflating: microsoftexcel/modules/esrgan_model_arch.py \r\n inflating: microsoftexcel/modules/extensions.py \r\n inflating: microsoftexcel/modules/extras.py \r\n inflating: microsoftexcel/modules/extra_networks.py \r\n inflating: microsoftexcel/modules/extra_networks_hypernet.py \r\n inflating: microsoftexcel/modules/face_restoration.py \r\n inflating: microsoftexcel/modules/generation_parameters_copypaste.py \r\n inflating: microsoftexcel/modules/gfpgan_model.py \r\n inflating: microsoftexcel/modules/gitpython_hack.py \r\n inflating: microsoftexcel/modules/hashes.py \r\n creating: microsoftexcel/modules/hypernetworks/\r\n inflating: microsoftexcel/modules/hypernetworks/hypernetwork.py \r\n inflating: microsoftexcel/modules/hypernetworks/ui.py \r\n inflating: microsoftexcel/modules/images.py \r\n inflating: microsoftexcel/modules/img2img.py \r\n inflating: microsoftexcel/modules/import_hook.py \r\n inflating: microsoftexcel/modules/interrogate.py \r\n inflating: microsoftexcel/modules/launch_utils.py \r\n inflating: microsoftexcel/modules/localization.py \r\n inflating: microsoftexcel/modules/lowvram.py \r\n inflating: microsoftexcel/modules/mac_specific.py \r\n inflating: microsoftexcel/modules/masking.py \r\n inflating: microsoftexcel/modules/memmon.py \r\n inflating: microsoftexcel/modules/modelloader.py \r\n creating: microsoftexcel/modules/models/\r\n creating: microsoftexcel/modules/models/diffusion/\r\n inflating: microsoftexcel/modules/models/diffusion/ddpm_edit.py \r\n creating: microsoftexcel/modules/models/diffusion/uni_pc/\r\n inflating: microsoftexcel/modules/models/diffusion/uni_pc/sampler.py \r\n inflating: microsoftexcel/modules/models/diffusion/uni_pc/uni_pc.py \r\n inflating: microsoftexcel/modules/models/diffusion/uni_pc/__init__.py \r\n inflating: 
microsoftexcel/modules/ngrok.py \r\n inflating: microsoftexcel/modules/paths.py \r\n inflating: microsoftexcel/modules/paths_internal.py \r\n inflating: microsoftexcel/modules/postprocessing.py \r\n inflating: microsoftexcel/modules/processing.py \r\n inflating: microsoftexcel/modules/progress.py \r\n inflating: microsoftexcel/modules/prompt_parser.py \r\n inflating: microsoftexcel/modules/realesrgan_model.py \r\n inflating: microsoftexcel/modules/restart.py \r\n inflating: microsoftexcel/modules/Roboto-Regular.ttf \r\n inflating: microsoftexcel/modules/safe.py \r\n inflating: microsoftexcel/modules/scripts.py \r\n inflating: microsoftexcel/modules/scripts_auto_postprocessing.py \r\n inflating: microsoftexcel/modules/scripts_postprocessing.py \r\n inflating: microsoftexcel/modules/script_callbacks.py \r\n inflating: microsoftexcel/modules/script_loading.py \r\n inflating: microsoftexcel/modules/sd_disable_initialization.py \r\n inflating: microsoftexcel/modules/sd_hijack.py \r\n inflating: microsoftexcel/modules/sd_hijack_checkpoint.py \r\n inflating: microsoftexcel/modules/sd_hijack_clip.py \r\n inflating: microsoftexcel/modules/sd_hijack_clip_old.py \r\n inflating: microsoftexcel/modules/sd_hijack_inpainting.py \r\n inflating: microsoftexcel/modules/sd_hijack_ip2p.py \r\n inflating: microsoftexcel/modules/sd_hijack_open_clip.py \r\n inflating: microsoftexcel/modules/sd_hijack_optimizations.py \r\n inflating: microsoftexcel/modules/sd_hijack_unet.py \r\n inflating: microsoftexcel/modules/sd_hijack_utils.py \r\n inflating: microsoftexcel/modules/sd_hijack_xlmr.py \r\n inflating: microsoftexcel/modules/sd_models.py \r\n inflating: microsoftexcel/modules/sd_models_config.py \r\n inflating: microsoftexcel/modules/sd_samplers.py \r\n inflating: microsoftexcel/modules/sd_samplers_common.py \r\n inflating: microsoftexcel/modules/sd_samplers_compvis.py \r\n inflating: microsoftexcel/modules/sd_samplers_kdiffusion.py \r\n inflating: microsoftexcel/modules/sd_unet.py \r\n inflating: microsoftexcel/modules/sd_vae.py \r\n inflating: microsoftexcel/modules/sd_vae_approx.py \r\n inflating: microsoftexcel/modules/sd_vae_taesd.py \r\n inflating: microsoftexcel/modules/shared.py \r\n inflating: microsoftexcel/modules/shared_items.py \r\n inflating: microsoftexcel/modules/styles.py \r\n inflating: microsoftexcel/modules/sub_quadratic_attention.py \r\n inflating: microsoftexcel/modules/sysinfo.py \r\n creating: microsoftexcel/modules/textual_inversion/\r\n inflating: microsoftexcel/modules/textual_inversion/autocrop.py \r\n inflating: microsoftexcel/modules/textual_inversion/dataset.py \r\n inflating: microsoftexcel/modules/textual_inversion/image_embedding.py \r\n inflating: microsoftexcel/modules/textual_inversion/learn_schedule.py \r\n inflating: microsoftexcel/modules/textual_inversion/logging.py \r\n inflating: microsoftexcel/modules/textual_inversion/preprocess.py \r\n inflating: microsoftexcel/modules/textual_inversion/test_embedding.png \r\n inflating: microsoftexcel/modules/textual_inversion/textual_inversion.py \r\n inflating: microsoftexcel/modules/textual_inversion/ui.py \r\n inflating: microsoftexcel/modules/timer.py \r\n inflating: microsoftexcel/modules/txt2img.py \r\n inflating: microsoftexcel/modules/ui.py \r\n inflating: microsoftexcel/modules/ui_common.py \r\n inflating: microsoftexcel/modules/ui_components.py \r\n inflating: microsoftexcel/modules/ui_extensions.py \r\n inflating: microsoftexcel/modules/ui_extra_networks.py \r\n inflating: 
microsoftexcel/modules/ui_extra_networks_checkpoints.py \r\n inflating: microsoftexcel/modules/ui_extra_networks_hypernets.py \r\n inflating: microsoftexcel/modules/ui_extra_networks_textual_inversion.py \r\n inflating: microsoftexcel/modules/ui_gradio_extensions.py \r\n inflating: microsoftexcel/modules/ui_loadsave.py \r\n inflating: microsoftexcel/modules/ui_postprocessing.py \r\n inflating: microsoftexcel/modules/ui_settings.py \r\n inflating: microsoftexcel/modules/ui_tempdir.py \r\n inflating: microsoftexcel/modules/upscaler.py \r\n inflating: microsoftexcel/modules/xlmr.py \r\n inflating: microsoftexcel/package.json \r\n inflating: microsoftexcel/pyproject.toml \r\n inflating: microsoftexcel/README.md \r\n inflating: microsoftexcel/requirements-test.txt \r\n inflating: microsoftexcel/requirements.txt \r\n inflating: microsoftexcel/requirements_versions.txt \r\n inflating: microsoftexcel/screenshot.png \r\n inflating: microsoftexcel/script.js \r\n creating: microsoftexcel/scripts/\r\n inflating: microsoftexcel/scripts/custom_code.py \r\n inflating: microsoftexcel/scripts/img2imgalt.py \r\n inflating: microsoftexcel/scripts/loopback.py \r\n inflating: microsoftexcel/scripts/outpainting_mk_2.py \r\n inflating: microsoftexcel/scripts/poor_mans_outpainting.py \r\n inflating: microsoftexcel/scripts/postprocessing_codeformer.py \r\n inflating: microsoftexcel/scripts/postprocessing_gfpgan.py \r\n inflating: microsoftexcel/scripts/postprocessing_upscale.py \r\n inflating: microsoftexcel/scripts/prompts_from_file.py \r\n inflating: microsoftexcel/scripts/prompt_matrix.py \r\n inflating: microsoftexcel/scripts/sd_upscale.py \r\n inflating: microsoftexcel/scripts/xyz_grid.py \r\n inflating: microsoftexcel/style.css \r\n creating: microsoftexcel/test/\r\n inflating: microsoftexcel/test/conftest.py \r\n inflating: microsoftexcel/test/test_extras.py \r\n creating: microsoftexcel/test/test_files/\r\n inflating: microsoftexcel/test/test_files/empty.pt \r\n inflating: microsoftexcel/test/test_files/img2img_basic.png \r\n inflating: microsoftexcel/test/test_files/mask_basic.png \r\n inflating: microsoftexcel/test/test_img2img.py \r\n inflating: microsoftexcel/test/test_txt2img.py \r\n inflating: microsoftexcel/test/test_utils.py \r\n extracting: microsoftexcel/test/__init__.py \r\n creating: microsoftexcel/textual_inversion_templates/\r\n inflating: microsoftexcel/textual_inversion_templates/hypernetwork.txt \r\n inflating: microsoftexcel/textual_inversion_templates/none.txt \r\n inflating: microsoftexcel/textual_inversion_templates/style.txt \r\n inflating: microsoftexcel/textual_inversion_templates/style_filewords.txt \r\n inflating: microsoftexcel/textual_inversion_templates/subject.txt \r\n inflating: microsoftexcel/textual_inversion_templates/subject_filewords.txt \r\n inflating: microsoftexcel/webui-macos-env.sh \r\n inflating: microsoftexcel/webui-user.bat \r\n inflating: microsoftexcel/webui-user.sh \r\n inflating: microsoftexcel/webui.bat \r\n inflating: microsoftexcel/webui.py \r\n inflating: microsoftexcel/webui.sh \r\nCloning into '/content/microsoftexcel/extensions/microsoftexcel-tunnels'...\r\nremote: Enumerating objects: 143, done.\r\nremote: Counting objects: 100% (38/38), done.\r\nremote: Compressing objects: 100% (14/14), done.\r\nremote: Total 143 (delta 35), reused 24 (delta 24), pack-reused 105\r\nReceiving objects: 100% (143/143), 26.38 KiB | 13.19 MiB/s, done.\r\nResolving deltas: 100% (62/62), done.\r\nCloning into 
'/content/microsoftexcel/extensions/microsoftexcel-controlnet'...\r\nremote: Enumerating objects: 7327, done.\r\nremote: Counting objects: 100% (2275/2275), done.\r\nremote: Compressing objects: 100% (128/128), done.\r\nremote: Total 7327 (delta 2172), reused 2178 (delta 2147), pack-reused 5052\r\nReceiving objects: 100% (7327/7327), 15.36 MiB | 9.38 MiB/s, done.\r\nResolving deltas: 100% (4220/4220), done.\r\nCloning into '/content/microsoftexcel/extensions/openpose-editor'...\r\nremote: Enumerating objects: 403, done.\r\nremote: Counting objects: 100% (123/123), done.\r\nremote: Compressing objects: 100% (56/56), done.\r\nremote: Total 403 (delta 88), reused 80 (delta 67), pack-reused 280\r\nReceiving objects: 100% (403/403), 1.15 MiB | 14.54 MiB/s, done.\r\nResolving deltas: 100% (170/170), done.\r\nCloning into '/content/microsoftexcel/extensions/posex'...\r\nremote: Enumerating objects: 407, done.\r\nremote: Counting objects: 100% (43/43), done.\r\nremote: Compressing objects: 100% (19/19), done.\r\nremote: Total 407 (delta 21), reused 35 (delta 19), pack-reused 364\r\nReceiving objects: 100% (407/407), 11.39 MiB | 8.04 MiB/s, done.\r\nResolving deltas: 100% (196/196), done.\r\nCloning into '/content/microsoftexcel/extensions/a1111-microsoftexcel-tagcomplete'...\r\nremote: Enumerating objects: 1341, done.\r\nremote: Counting objects: 100% (1341/1341), done.\r\nremote: Compressing objects: 100% (505/505), done.\r\nremote: Total 1341 (delta 783), reused 1251 (delta 775), pack-reused 0\r\nReceiving objects: 100% (1341/1341), 3.85 MiB | 4.02 MiB/s, done.\r\nResolving deltas: 100% (783/783), done.\r\nCloning into '/content/microsoftexcel/extensions/microsoftexcel-supermerger'...\r\nremote: Enumerating objects: 720, done.\r\nremote: Counting objects: 100% (237/237), done.\r\nremote: Compressing objects: 100% (94/94), done.\r\nremote: Total 720 (delta 180), reused 183 (delta 143), pack-reused 483\r\nReceiving objects: 100% (720/720), 307.44 KiB | 13.37 MiB/s, done.\r\nResolving deltas: 100% (374/374), done.\r\nCloning into '/content/microsoftexcel/extensions/ultimate-upscale-for-automatic1111'...\r\nremote: Enumerating objects: 309, done.\r\nremote: Counting objects: 100% (84/84), done.\r\nremote: Compressing objects: 100% (46/46), done.\r\nremote: Total 309 (delta 34), reused 64 (delta 23), pack-reused 225\r\nReceiving objects: 100% (309/309), 32.23 MiB | 11.17 MiB/s, done.\r\nResolving deltas: 100% (109/109), done.\r\nCloning into '/content/microsoftexcel/extensions/a1111-microsoftexcel-locon'...\r\nremote: Enumerating objects: 188, done.\r\nremote: Counting objects: 100% (43/43), done.\r\nremote: Compressing objects: 100% (20/20), done.\r\nremote: Total 188 (delta 18), reused 40 (delta 17), pack-reused 145\r\nReceiving objects: 100% (188/188), 47.64 KiB | 15.88 MiB/s, done.\r\nResolving deltas: 100% (93/93), done.\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 1229 100 1229 0 0 4708 0 --:--:-- --:--:-- --:--:-- 4708\r\n100 68776 100 68776 0 0 239k 0 --:--:-- --:--:-- --:--:-- 239k\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 1195 100 1195 0 0 5063 0 --:--:-- --:--:-- --:--:-- 5063\r\n100 1509k 100 1509k 0 0 5428k 0 --:--:-- --:--:-- --:--:-- 5428k\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 1191 100 1191 0 0 4983 0 --:--:-- --:--:-- --:--:-- 4962\r\n100 118M 100 118M 0 0 212M 0 
--:--:-- --:--:-- --:--:-- 212M\r\n/content/microsoftexcel/extensions\r\nArchive: /content/microsoftexcel/extensions/microsoftexcel-images-browser.zip\r\n creating: sd-webui-images-browser/\r\n inflating: sd-webui-images-browser/.DS_Store \r\n creating: sd-webui-images-browser/.git/\r\n creating: sd-webui-images-browser/.git/branches/\r\n inflating: sd-webui-images-browser/.git/config \r\n inflating: sd-webui-images-browser/.git/description \r\n inflating: sd-webui-images-browser/.git/HEAD \r\n creating: sd-webui-images-browser/.git/hooks/\r\n inflating: sd-webui-images-browser/.git/hooks/applypatch-msg.sample \r\n inflating: sd-webui-images-browser/.git/hooks/commit-msg.sample \r\n inflating: sd-webui-images-browser/.git/hooks/fsmonitor-watchman.sample \r\n inflating: sd-webui-images-browser/.git/hooks/post-update.sample \r\n inflating: sd-webui-images-browser/.git/hooks/pre-applypatch.sample \r\n inflating: sd-webui-images-browser/.git/hooks/pre-commit.sample \r\n inflating: sd-webui-images-browser/.git/hooks/pre-merge-commit.sample \r\n inflating: sd-webui-images-browser/.git/hooks/pre-push.sample \r\n inflating: sd-webui-images-browser/.git/hooks/pre-rebase.sample \r\n inflating: sd-webui-images-browser/.git/hooks/pre-receive.sample \r\n inflating: sd-webui-images-browser/.git/hooks/prepare-commit-msg.sample \r\n inflating: sd-webui-images-browser/.git/hooks/update.sample \r\n inflating: sd-webui-images-browser/.git/index \r\n creating: sd-webui-images-browser/.git/info/\r\n inflating: sd-webui-images-browser/.git/info/exclude \r\n creating: sd-webui-images-browser/.git/logs/\r\n inflating: sd-webui-images-browser/.git/logs/HEAD \r\n creating: sd-webui-images-browser/.git/logs/refs/\r\n creating: sd-webui-images-browser/.git/logs/refs/heads/\r\n inflating: sd-webui-images-browser/.git/logs/refs/heads/main \r\n creating: sd-webui-images-browser/.git/logs/refs/remotes/\r\n creating: sd-webui-images-browser/.git/logs/refs/remotes/origin/\r\n inflating: sd-webui-images-browser/.git/logs/refs/remotes/origin/HEAD \r\n creating: sd-webui-images-browser/.git/objects/\r\n creating: sd-webui-images-browser/.git/objects/info/\r\n creating: sd-webui-images-browser/.git/objects/pack/\r\n inflating: sd-webui-images-browser/.git/objects/pack/pack-8c09dc0723b064b3aad4351dc4af51e311b0601c.idx \r\n inflating: sd-webui-images-browser/.git/objects/pack/pack-8c09dc0723b064b3aad4351dc4af51e311b0601c.pack \r\n inflating: sd-webui-images-browser/.git/packed-refs \r\n creating: sd-webui-images-browser/.git/refs/\r\n creating: sd-webui-images-browser/.git/refs/heads/\r\n inflating: sd-webui-images-browser/.git/refs/heads/main \r\n creating: sd-webui-images-browser/.git/refs/remotes/\r\n creating: sd-webui-images-browser/.git/refs/remotes/origin/\r\n inflating: sd-webui-images-browser/.git/refs/remotes/origin/HEAD \r\n creating: sd-webui-images-browser/.git/refs/tags/\r\n inflating: sd-webui-images-browser/.gitignore \r\n creating: sd-webui-images-browser/javascript/\r\n inflating: sd-webui-images-browser/javascript/images_history.js \r\n inflating: sd-webui-images-browser/README.md \r\n creating: sd-webui-images-browser/scripts/\r\n inflating: sd-webui-images-browser/scripts/images_history.py \r\n/content/microsoftexcel/embeddings\r\nArchive: /content/microsoftexcel/embeddings/embeddings.zip\r\n creating: embeddings/\r\n inflating: embeddings/21charturnerv2.pt \r\n inflating: embeddings/Asian-Less-Neg.pt \r\n inflating: embeddings/bad-artist-anime.pt \r\n inflating: embeddings/bad-artist.pt \r\n inflating: 
embeddings/bad-hands-5.pt \r\n inflating: embeddings/bad-image-v2-39000.pt \r\n inflating: embeddings/bad-picture-chill-75v.pt \r\n inflating: embeddings/BadDream.pt \r\n inflating: embeddings/badhandv4.pt \r\n inflating: embeddings/bad_pictures.pt \r\n inflating: embeddings/bad_prompt.pt \r\n inflating: embeddings/bad_prompt_version2.pt \r\n inflating: embeddings/charturnerv2.pt \r\n inflating: embeddings/CyberRealistic_Negative-neg.pt \r\n inflating: embeddings/easynegative.safetensors \r\n inflating: embeddings/EasyNegativeV2.safetensors \r\n inflating: embeddings/epiCNegative.pt \r\n inflating: embeddings/epiCRealism.pt \r\n inflating: embeddings/FastNegativeEmbedding.pt \r\n inflating: embeddings/HyperStylizeV6.pt \r\n inflating: embeddings/nartfixer.pt \r\n inflating: embeddings/negative_hand-neg.pt \r\n inflating: embeddings/nfixer.pt \r\n inflating: embeddings/ng_deepnegative_v1_75t.pt \r\n inflating: embeddings/nrealfixer.pt \r\n inflating: embeddings/pureerosface_v1.pt \r\n inflating: embeddings/rmadanegative402_sd15-neg.pt \r\n inflating: embeddings/ulzzang-6500-v1.1.bin \r\n inflating: embeddings/ulzzang-6500.pt \r\n inflating: embeddings/UnrealisticDream.pt \r\n inflating: embeddings/verybadimagenegative_v1.3.pt \r\n/content/microsoftexcel/models/ESRGAN\r\nArchive: /content/microsoftexcel/models/ESRGAN/upscalers.zip\r\n inflating: 4x-UltraSharp.pth \r\n inflating: 4x_foolhardy_Remacri.pth \r\n/content\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 1133 100 1133 0 0 4800 0 --:--:-- --:--:-- --:--:-- 4800\r\n100 4067M 100 4067M 0 0 221M 0 0:00:18 0:00:18 --:--:-- 242M\r\n/content/microsoftexcel\r\nfatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): .git\r\nPython 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0]\r\nVersion: ## 1.4.0\r\nCommit hash: \r\nInstalling gfpgan\r\nInstalling clip\r\nInstalling open_clip\r\nInstalling xformers\r\nCloning Stable Diffusion into /content/microsoftexcel/repositories/stable-diffusion-stability-ai...\r\nCloning K-diffusion into /content/microsoftexcel/repositories/k-diffusion...\r\nCloning CodeFormer into /content/microsoftexcel/repositories/CodeFormer...\r\nCloning BLIP into /content/microsoftexcel/repositories/BLIP...\r\nInstalling requirements for CodeFormer\r\nInstalling requirements\r\nInstalling sd-webui-controlnet requirement: mediapipe\r\nInstalling sd-webui-controlnet requirement: svglib\r\nInstalling sd-webui-controlnet requirement: fvcore\r\n\r\nInstalling pycloudflared\r\n\r\nInstalling diffusers\r\n\r\nLaunching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\r\n2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 
/content/microsoftexcel/launch.py:38 in \u2502\r\n\u2502 \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2771 38 \u2502 main() \u2502\r\n\u2502 39 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/launch.py:34 in main \u2502\r\n\u2502 \u2502\r\n\u2502 31 \u2502 if args.test_server: \u2502\r\n\u2502 32 \u2502 \u2502 configure_for_tests() \u2502\r\n\u2502 33 \u2502 \u2502\r\n\u2502 \u2771 34 \u2502 start() \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/launch_utils.py:340 in start \u2502\r\n\u2502 \u2502\r\n\u2502 337 \u2502\r\n\u2502 338 def start(): \u2502\r\n\u2502 339 \u2502 print(f\"Launching {'API server' if '--nowebui' in sys.argv else 'W \u2502\r\n\u2502 \u2771 340 \u2502 import webui \u2502\r\n\u2502 341 \u2502 if '--nowebui' in sys.argv: \u2502\r\n\u2502 342 \u2502 \u2502 webui.api_only() \u2502\r\n\u2502 343 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/webui.py:42 in \u2502\r\n\u2502 \u2502\r\n\u2502 39 startup_timer.record(\"import ldm\") \u2502\r\n\u2502 40 \u2502\r\n\u2502 41 from modules import extra_networks \u2502\r\n\u2502 \u2771 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, \u2502\r\n\u2502 43 \u2502\r\n\u2502 44 # Truncate version number of nightly/local build of PyTorch to not cau \u2502\r\n\u2502 45 if \".dev\" in torch.__version__ or \"+git\" in torch.__version__: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/call_queue.py:5 in \u2502\r\n\u2502 \u2502\r\n\u2502 2 import threading \u2502\r\n\u2502 3 import time \u2502\r\n\u2502 4 \u2502\r\n\u2502 \u2771 5 from modules import shared, progress, errors \u2502\r\n\u2502 6 \u2502\r\n\u2502 7 queue_lock = threading.Lock() \u2502\r\n\u2502 8 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/shared.py:18 in \u2502\r\n\u2502 \u2502\r\n\u2502 15 import modules.devices as devices \u2502\r\n\u2502 16 from modules import localization, script_loading, errors, ui_component \u2502\r\n\u2502 17 from modules.paths_internal import models_path, script_path, data_path \u2502\r\n\u2502 \u2771 18 from ldm.models.diffusion.ddpm import LatentDiffusion \u2502\r\n\u2502 19 from typing import Optional \u2502\r\n\u2502 20 \u2502\r\n\u2502 21 demo = None \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model \u2502\r\n\u2502 s/diffusion/ddpm.py:20 in \u2502\r\n\u2502 \u2502\r\n\u2502 17 import itertools \u2502\r\n\u2502 18 from tqdm import tqdm \u2502\r\n\u2502 19 from torchvision.utils import make_grid \u2502\r\n\u2502 \u2771 20 from pytorch_lightning.utilities.distributed import rank_zero_only \u2502\r\n\u2502 21 from omegaconf import ListConfig \u2502\r\n\u2502 22 \u2502\r\n\u2502 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'\n```\n\n\n### 
Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c', 'files': [{'path': 'extensions-builtin/LDSR/sd_hijack_ddpm_v1.py', 'Loc': {'(None, None, None)': {'mod': [17]}}, 'status': 'modified'}, {'path': 'modules/models/diffusion/ddpm_edit.py', 'Loc': {'(None, None, None)': {'mod': [22]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/models/diffusion/ddpm_edit.py", "extensions-builtin/LDSR/sd_hijack_ddpm_v1.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "ef4c94e1cfe66299227aa95a28c2380d21cb1600", "is_iss": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3902", "iss_label": "", "title": "[Feature Request]: ", "body": "Finer control of CFG Scale? now it goes by 0.5 steps. I'm trying to replicate work i did on other app which have CFG scale control by 0.1. i cannot get the same result, of course. \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ui-config.json"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": ["ui-config.json"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "bf30673f5132c8f28357b31224c54331e788d3e7", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3301", "iss_label": "bug-report", "title": "Expected all tensors to be on the same device", "body": "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument index in method wrapper__index_select)\r\n\r\nhow to pick the CUDA:0 \uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'bf30673f5132c8f28357b31224c54331e788d3e7', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 17)': {'mod': [17]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "39919c40dd18f5a14ae21403efea1b0f819756c7", "is_iss": 0, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2190", "iss_label": "bug-report", "title": "How to use .ckpt model on repo", "body": "Hello everyone!\r\n\r\nI was able to train a custom model using Dreambooth and I now have a custom ckpt trained on myself. 
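The colab launch log in the record above ends with `ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'`, and the record's `file_loc` points at one-line import fixes in `extensions-builtin/LDSR/sd_hijack_ddpm_v1.py` and `modules/models/diffusion/ddpm_edit.py`. A minimal sketch of the usual compatibility shim, assuming pytorch_lightning >= 1.9 (which moved `rank_zero_only` into `pytorch_lightning.utilities.rank_zero`); the try/except shape is illustrative rather than a quote of the actual patch:

```python
# Compatibility import: pytorch_lightning >= 1.9 removed the
# pytorch_lightning.utilities.distributed module; rank_zero_only now lives
# in pytorch_lightning.utilities.rank_zero. Older installs keep the old path.
try:
    from pytorch_lightning.utilities.distributed import rank_zero_only
except ImportError:
    from pytorch_lightning.utilities.rank_zero import rank_zero_only
```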
Where do I put this file to be able to use it in this repo?\r\n\r\nI dropped in into models but not sure what to do next?\r\n\r\nAppreciate any help", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '39919c40dd18f5a14ae21403efea1b0f819756c7', 'files': [{'path': 'models/Stable-diffusion', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/Stable-diffusion"]}} -{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "556c36b9607e3f4eacdddc85f8e7a78b29476ea7", "is_iss": 1, "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1614", "iss_label": "enhancement", "title": "Feature request: GPU temperature control ", "body": "**Is your feature request related to a problem? Please describe.**\r\nI don't like 85 degrees (Celsius) on my GPU, especially if it lasts more than 30 minutes or even 1 hour\r\n\r\n**Describe the solution you'd like**\r\nIf temp on a GPU is more than {maxTemp} and it lasts {accumulateTempTime} it will pause processing for {cooldownTime} or until it cools to {minTemp}, so my GPU won't end up with exploding\r\n\r\n**Describe alternatives you've considered**\r\nNot pausing, but lowering the activity to a few tens of seconds per step.\r\n\r\n**Additional context**\r\nNot lowering it in hard core, but smartly lowering activity (using sth similar to PID), so the temp will stay at {desiredTemp}\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "w-e-w", "pro": "stable-diffusion-webui-GPU-temperature-protection"}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["stable-diffusion-webui-GPU-temperature-protection"]}} -{"organization": "python", "repo_name": "cpython", "base_commit": "c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba", "is_iss": 0, "iss_html_url": "https://github.com/python/cpython/issues/39472", "iss_label": "docs", "title": "Wrong reference for specific minidom methods", "body": "BPO | [832251](https://bugs.python.org/issue832251)\n--- | :---\nNosy | @freddrake\n\n*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*\n\n
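The CFG-scale feature request in this block is located to `ui-config.json`, the file where the webui persists per-slider settings. A hedged sketch of the edit, assuming the usual `"<tab>/<label>/<property>"` key layout (`"txt2img/CFG Scale/step"` is inferred from that pattern, so verify it against your own generated file):

```python
import json

# Illustrative: lower the CFG Scale slider step from 0.5 to 0.1 in the
# generated ui-config.json; key names are assumed, check your local file.
path = "ui-config.json"
with open(path) as f:
    ui_config = json.load(f)
for key in ("txt2img/CFG Scale/step", "img2img/CFG Scale/step"):
    if key in ui_config:
        ui_config[key] = 0.1
with open(path, "w") as f:
    json.dump(ui_config, f, indent=4)
```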
      \n\nGitHub fields:\n```python\nassignee = 'https://github.com/freddrake'\nclosed_at = \ncreated_at = \nlabels = ['docs']\ntitle = 'Wrong reference for specific minidom methods'\nupdated_at = \nuser = 'https://bugs.python.org/nerby'\n```\n\nbugs.python.org fields:\n```python\nactivity = \nactor = 'fdrake'\nassignee = 'fdrake'\nclosed = True\nclosed_date = None\ncloser = None\ncomponents = ['Documentation']\ncreation = \ncreator = 'nerby'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 832251\nkeywords = []\nmessage_count = 3.0\nmessages = ['18799', '18800', '18801']\nnosy_count = 2.0\nnosy_names = ['fdrake', 'nerby']\npr_nums = []\npriority = 'high'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue832251'\nversions = ['Python 2.3']\n```\n\n
      \n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba', 'files': [{'path': 'Doc/lib/xmldomminidom.tex', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\ndoc\u95ee\u9898", "iss_reason": "2\ndoc\u9519\u8bef\uff0c\u4e0d\u662fbug", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Doc/lib/xmldomminidom.tex"]}} -{"organization": "python", "repo_name": "cpython", "base_commit": "5a65c2d43607a5033d7171445848cde21f07d81d", "is_iss": 0, "iss_html_url": "https://github.com/python/cpython/issues/32681", "iss_label": "interpreter-core", "title": ".pyc writing/reading race condition (PR#189)", "body": "BPO | [210610](https://bugs.python.org/issue210610)\n--- | :---\nNosy | @gvanrossum\n\n*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*\n\n
      \n\nGitHub fields:\n```python\nassignee = 'https://github.com/gvanrossum'\nclosed_at = \ncreated_at = \nlabels = ['interpreter-core']\ntitle = '.pyc writing/reading race condition (PR#189)'\nupdated_at = \nuser = 'https://bugs.python.org/anonymous'\n```\n\nbugs.python.org fields:\n```python\nactivity = \nactor = 'gvanrossum'\nassignee = 'gvanrossum'\nclosed = True\nclosed_date = None\ncloser = None\ncomponents = ['Interpreter Core']\ncreation = \ncreator = 'anonymous'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 210610\nkeywords = []\nmessage_count = 4.0\nmessages = ['66', '67', '68', '69']\nnosy_count = 2.0\nnosy_names = ['gvanrossum', 'jhylton']\npr_nums = []\npriority = 'low'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue210610'\nversions = []\n```\n\n
      \n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5a65c2d43607a5033d7171445848cde21f07d81d', 'files': [{'path': 'Doc/library/os.rst', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": ["fcntl.h"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["fcntl.h"], "doc": ["Doc/library/os.rst"], "test": [], "config": [], "asset": []}} -{"organization": "python", "repo_name": "cpython", "base_commit": "adf03c3544084359d89e7a0bc2a5aa0561f1a0f2", "is_iss": 0, "iss_html_url": "https://github.com/python/cpython/issues/68620", "iss_label": "stdlib\nrelease-blocker", "title": "Upgrade windows builds to use OpenSSL 1.0.2c", "body": "BPO | [24432](https://bugs.python.org/issue24432)\n--- | :---\nNosy | @pfmoore, @pitrou, @larryhastings, @giampaolo, @tiran, @tjguk, @benjaminp, @ned-deily, @alex, @bitdancer, @zware, @zooba, @dstufft\n\n*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*\n\n
      \n\nGitHub fields:\n```python\nassignee = 'https://github.com/zooba'\nclosed_at = \ncreated_at = \nlabels = ['library', 'release-blocker']\ntitle = 'Upgrade windows builds to use OpenSSL 1.0.2c'\nupdated_at = \nuser = 'https://github.com/alex'\n```\n\nbugs.python.org fields:\n```python\nactivity = \nactor = 'python-dev'\nassignee = 'steve.dower'\nclosed = True\nclosed_date = \ncloser = 'steve.dower'\ncomponents = ['Library (Lib)']\ncreation = \ncreator = 'alex'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 24432\nkeywords = ['security_issue']\nmessage_count = 29.0\nmessages = ['245173', '245178', '245283', '246116', '246133', '246136', '246143', '246172', '246182', '246185', '246189', '246190', '246195', '246205', '246209', '246210', '246211', '246212', '246213', '246214', '246215', '246216', '246221', '246222', '246224', '246225', '246227', '246228', '246240']\nnosy_count = 15.0\nnosy_names = ['paul.moore', 'janssen', 'pitrou', 'larry', 'giampaolo.rodola', 'christian.heimes', 'tim.golden', 'benjamin.peterson', 'ned.deily', 'alex', 'r.david.murray', 'python-dev', 'zach.ware', 'steve.dower', 'dstufft']\npr_nums = []\npriority = 'release blocker'\nresolution = 'fixed'\nstage = 'resolved'\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue24432'\nversions = ['Python 2.7', 'Python 3.4', 'Python 3.5', 'Python 3.6']\n```\n\n
      \n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'adf03c3544084359d89e7a0bc2a5aa0561f1a0f2', 'files': [{'path': 'PCbuild/get_externals.bat', 'Loc': {'(None, None, 57)': {'mod': [57]}}, 'status': 'modified'}, {'path': 'PCbuild/python.props', 'Loc': {'(None, None, 37)': {'mod': [37]}}, 'status': 'modified'}, {'path': 'PCbuild/readme.txt', 'Loc': {'(None, None, 200)': {'mod': [200]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["PCbuild/readme.txt"], "test": [], "config": ["PCbuild/get_externals.bat", "PCbuild/python.props"], "asset": []}} -{"organization": "python", "repo_name": "cpython", "base_commit": "5198a5c7aa77367765ae03542b561845094ca30d", "is_iss": 0, "iss_html_url": "https://github.com/python/cpython/issues/48435", "iss_label": "type-bug\nstdlib\ntopic-regex", "title": "re module treats raw strings as normal strings", "body": "BPO | [4185](https://bugs.python.org/issue4185)\n--- | :---\nNosy | @gvanrossum, @loewis, @akuchling, @birkenfeld, @ezio-melotti\nFiles |
    • [raw-strings-with-re.txt](https://bugs.python.org/file11868/raw-strings-with-re.txt \"Uploaded as text/plain at 2008-10-23.03:55:27 by @ezio-melotti\"): Interactive Python session with more examples
    • \n\n*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*\n\n
      \n\nGitHub fields:\n```python\nassignee = 'https://github.com/akuchling'\nclosed_at = \ncreated_at = \nlabels = ['expert-regex', 'type-bug', 'library']\ntitle = 're module treats raw strings as normal strings'\nupdated_at = \nuser = 'https://github.com/ezio-melotti'\n```\n\nbugs.python.org fields:\n```python\nactivity = \nactor = 'georg.brandl'\nassignee = 'akuchling'\nclosed = True\nclosed_date = \ncloser = 'georg.brandl'\ncomponents = ['Library (Lib)', 'Regular Expressions']\ncreation = \ncreator = 'ezio.melotti'\ndependencies = []\nfiles = ['11868']\nhgrepos = []\nissue_num = 4185\nkeywords = []\nmessage_count = 8.0\nmessages = ['75133', '75134', '75135', '75760', '77502', '77562', '77575', '78699']\nnosy_count = 5.0\nnosy_names = ['gvanrossum', 'loewis', 'akuchling', 'georg.brandl', 'ezio.melotti']\npr_nums = []\npriority = 'normal'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = 'behavior'\nurl = 'https://bugs.python.org/issue4185'\nversions = ['Python 2.6', 'Python 2.5', 'Python 2.4']\n```\n\n
      \n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5198a5c7aa77367765ae03542b561845094ca30d', 'files': [{'path': 'Doc/library/re.rst', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nor\n3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["Doc/library/re.rst"], "test": [], "config": [], "asset": []}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "ab6bcb4968bef335175c0b01972657961b2b1250", "is_iss": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/565", "iss_label": "", "title": "[BUG/Help] \u4f7f\u7528ptuning\u5fae\u8c03\u65f6\u62a5\u9519RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nTraceback (most recent call last):\r\n File \"main.py\", line 429, in <module>\r\n main()\r\n File \"main.py\", line 112, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, trust_remote_code=True)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py\", line 679, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1804, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1958, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 205, in __init__\r\n self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 61, in __init__\r\n self.text_tokenizer = TextTokenizer(vocab_file)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 22, in __init__\r\n self.sp.Load(model_path)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py\", line 310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\nRuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] \n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n\u4f7f\u7528ptuning\u5fae\u8c03\u65f6\u62a5\u9519\uff0c\u5df2\u7ecf\u662f\u6700\u65b0\u7248\u7684\u6a21\u578b\u6587\u4ef6\u4e86\n\n### Environment\n\n```markdown\nPyTorch 1.11.0\r\nPython 3.8(ubuntu20.04)\r\nCuda 11.3\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ice_text.model"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", 
"base_commit": "801b1bb57690f0a99943f0a80c839b9ee120f3a7", "is_iss": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/388", "iss_label": "", "title": "\u4e3a\u4ec0\u4e48\u4e0d\u80fd\u7528\u5171\u4eabGPU\u5185\u5b58\u5462[Feature] <title>", "body": "### Is your feature request related to a problem? Please describe.\n\n\u4e3a\u4ec0\u4e48\u4e0d\u80fd\u7528\u5171\u4eabGPU\u5185\u5b58\u5462\r\n\u4e13\u75286G\u90fd\u6ee1\u4e86\u4f46\u662f\u5171\u4eabGPU\u5185\u5b58\u4e00\u70b9\u90fd\u6ca1\u52a8\r\n\n\n### Solutions\n\nemm\n\n### Additional context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "Jittor", "pro": "JittorLLMs"}], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["JittorLLMs"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "afe08a19ccadc8b238c218b245bb4c1c62598588", "is_iss": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/770", "iss_label": "", "title": "RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u8fd0\u884cpython cli_demo.py\u62a5\u9519\r\n\r\nroot@4uot40mdrplpv-0:/yx/ChatGLM-6B# python mycli_demo.py\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"/yx/ChatGLM-6B/mycli_demo.py\", line 6, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\"/yx/ChatGLM-6B/THUDM/chatglm-6b\", trust_remote_code=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 679, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 1804, in from_pretrained\r\n return cls._from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 1958, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 205, in __init__\r\n self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 61, in __init__\r\n self.text_tokenizer = TextTokenizer(vocab_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 22, in __init__\r\n self.sp.Load(model_path)\r\n File \"/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] \r\n\r\n\u6211\u662f\u5728docker\u4e2d\u8fd0\u884c\u7684, \u9ebb\u70e6\u770b\u770b\u662f\u600e\u4e48\u56de\u4e8b, \u8c22\u8c22\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nhelp\n\n### Environment\n\n```markdown\n- OS:Red Hat 4.8.5-44\r\n- Python:3.11\r\n- Transformers:4.27.1\r\n- PyTorch:2.0\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :False\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ice_text.model"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "d11eb5213e3c17225b47bb806a120dd45a80b126", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/63", "iss_label": "", "title": "How to fix error like this: torch.cuda.OutOfMemoryError: CUDA out of memory ?", "body": "OS: ubuntu 20.04\r\nThe error message said we need to change value of max_split_size_mb, but I search source code and cannot find any file contains max_split_size_mb, would you please provide some guidance about how to fix?\r\n```\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8/8 [00:16<00:00, 2.09s/it]\r\nTraceback (most recent call last):\r\n File \"/home/zhangclb/sandbox/ai_llm/ChatGLM-6B/cli_demo.py\", line 6, in <module>\r\n model = AutoModel.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True).half().cuda()\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 749, in cuda\r\n return self._apply(lambda t: t.cuda(device))\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n File 
\"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n [Previous line repeated 2 more times]\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 664, in _apply\r\n param_applied = fn(param)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 749, in <lambda>\r\n return self._apply(lambda t: t.cuda(device))\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 1.83 GiB total capacity; 1.27 GiB already allocated; 57.19 MiB free; 1.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd11eb5213e3c17225b47bb806a120dd45a80b126', 'files': [{'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["cli_demo.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a9fc0184446fba7f4f27addf519fea0b371df83a", "is_iss": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/417", "iss_label": "", "title": "[Help] <title> Oracle Linux 7.9 \u8fd0\u884cint4\u6a21\u578b\u51fa\u9519\uff0cAttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n/x/home/chatglm_env/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!\r\n RequestsDependencyWarning)\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\n/x/home/chatglm_env/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory\r\n warn(f\"Failed to load image Python extension: {e}\")\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nNo compiled kernel found.\r\nCompiling kernels : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c\r\nCompiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.so\r\nsh: gcc: command not found\r\nCompile failed, using default cpu kernel code.\r\nCompiling gcc -O3 -fPIC -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so\r\nKernels compiled : 
/x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so\r\nCannot load cpu kernel, don't use quantized model on cpu.\r\nUsing quantization cache\r\nApplying quantization to glm layers\r\nTraceback (most recent call last):\r\n File \"chatglm-int4-demo.py\", line 8, in <module>\r\n response, history = model.chat(tokenizer, '\u4f60\u597d', history=[])\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 1137, in chat\r\n outputs = self.generate(**input_ids, **gen_kwargs)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py\", line 1447, in generate\r\n **model_kwargs,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py\", line 2447, in sample\r\n output_hidden_states=output_hidden_states,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 1051, in forward\r\n return_dict=return_dict,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 887, in forward\r\n output_attentions=output_attentions\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 588, in forward\r\n output_attentions=output_attentions\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 406, in forward\r\n mixed_raw_layer = self.query_key_value(hidden_states)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 334, in forward\r\n output = W8A16LinearCPU.apply(input, self.weight, self.weight_scale, self.weight_bit_width, self.quantization_cache)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 74, in forward\r\n weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 256, in extract_weight_to_float\r\n func = cpu_kernels.int4WeightExtractionFloat\r\nAttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'\r\n\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nmodel_path = '/x/home/chatglm-6b-int4'\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\nmodel 
= AutoModel.from_pretrained(model_path, trust_remote_code=True).float()\r\n\r\nresponse, history = model.chat(tokenizer, '\u4f60\u597d', history=[])\r\n\n\n### Environment\n\n```markdown\n- OS: Oracle 7.9\r\n- Python: 3.7.13\r\n- Transformers: 2.6.1\r\n- PyTorch: 1.13.1\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : no cuda, use cpu\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "gcc"}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "\u5e93"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gcc"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "0c6d1750ef6042338534c3c97002175fa1ae0499", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/10", "iss_label": "question", "title": "\u53ef\u4ee5\u4f7f\u7528\u81ea\u5df1\u7684\u6570\u636e\u5fae\u8c03\u5417", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0c6d1750ef6042338534c3c97002175fa1ae0499', 'files': [{'path': 'ptuning/', 'Loc': {}}, {'path': 'ptuning/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ptuning/"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "c55ecd89a079b86620cc722f2e21a14e3718d0f3", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/39", "iss_label": "", "title": "6GB\u663e\u5361\u63d0\u793a\u663e\u5b58\u4e0d\u8db3", "body": "\u663e\u5361\uff1a3060laptop 6GB\r\n\u62a5\u9519\uff1aRuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. 
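The int4 record that closes above fails with `sh: gcc: command not found` and then `AttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'`: the quantization kernels (`quantization_kernels*.c`) are compiled with gcc at load time, and when compilation fails the `cpu_kernels` object is never created. A standard-library preflight check:

```python
import shutil

# The int4 CPU path compiles its C kernels at load time, so a compiler must
# be on PATH; on the reporter's Oracle Linux box that means installing gcc
# (with OpenMP support) through the system package manager first.
if shutil.which("gcc") is None:
    raise SystemExit("gcc not found on PATH; install it before loading the int4 model on CPU")
```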
See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c55ecd89a079b86620cc722f2e21a14e3718d0f3', 'files': [{'path': 'web_demo.py', 'Loc': {'(None, None, None)': {'mod': [5]}}, 'status': 'modified'}, {'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["web_demo.py", "cli_demo.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "1d87dac585c8fafd708db16860b628928ec5a821", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/532", "iss_label": "", "title": "[BUG/Help] \u8fd9\u4e24\u5929\u66f4\u65b0\u7248\u672c\u540e\uff0cchat\u7684\u5fae\u8c03\u597d\u50cf\u7528\u4e0d\u4e86\u4e86", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u524d\u51e0\u5929\u4f7f\u7528chat\u5fae\u8c03\u8fd8\u662f\u53ef\u4ee5\u7528\u7684\uff0c\u90a3\u65f6\u5019output\u6587\u4ef6\u662f\u5b8c\u6574\u7684\u5305\uff0c\u800c\u4e0d\u662f\u589e\u91cf\u5fae\u8c03\u5305\u3002\r\n\u8fd9\u4e24\u5929\u66f4\u65b0\u540e\uff0c\u4f7f\u7528\u7684\u8fd8\u662f\u9879\u76ee\u81ea\u5e26\u7684train_chat.sh\uff0c\u6a21\u578b\u7528\u7684\u662fint4\u3002\r\noutput\u6587\u4ef6\u786e\u5b9e\u5c0f\u4e86\uff0c\u4f46\u662f\u5374\u65e0\u6cd5\u8fd0\u884c\u4e86\uff0c\u5177\u4f53\u5f62\u5f0f\u4e3a\u8fd0\u884c\u4ee5\u4e0b\u4ee3\u7801\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\ntokenizer = AutoTokenizer.from_pretrained(\"/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50\", trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(\"/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50\", trust_remote_code=True).half().cuda()\r\nmodel = model.eval()\r\nresponse, history = model.chat(tokenizer, \"\u4f60\u597d\", history=[])\r\nprint(response)\r\n```\r\n\u62a5\u4ee5\u4e0b\u5185\u5bb9\u540e\u65e0\u53cd\u5e94\uff0c\u81f3\u5c115\u5206\u949f\u3002\u671f\u95f4\u663e\u5b58\u4e00\u76f4\u5728\u4e0a\u5347\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nThe dtype of attention mask (torch.int64) is not bool\r\n\u6700\u7ec8\u62a5\u9519\r\n2023-04-11 13:51:41.577016: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n-\n\n### Environment\n\n```markdown\ncolab pro \u9ed8\u8ba4\u73af\u5883\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1d87dac585c8fafd708db16860b628928ec5a821', 'files': [{'path': 'ptuning/main.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ptuning/main.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "edb127326a2d5afd855484f12a38e0119151f826", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/723", "iss_label": "", "title": 
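Issue 532 above coincides with the repo change where ptuning checkpoints stopped being full model dumps and became PrefixEncoder-only weights, so pointing `AutoModel.from_pretrained` at a checkpoint directory no longer works. A sketch along the lines of the ptuning README's inference recipe; the checkpoint path comes from the report, while `pre_seq_len=128` is an illustrative value that must match training:

```python
import torch
from transformers import AutoConfig, AutoModel, AutoTokenizer

# Load the base model first, then graft the trained prefix weights onto it.
config = AutoConfig.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True, pre_seq_len=128)
tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", config=config, trust_remote_code=True)

prefix_state_dict = torch.load("/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50/pytorch_model.bin")
prefix_weights = {
    k[len("transformer.prefix_encoder."):]: v
    for k, v in prefix_state_dict.items()
    if k.startswith("transformer.prefix_encoder.")
}
model.transformer.prefix_encoder.load_state_dict(prefix_weights)
model = model.half().cuda().eval()
```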
"centos\u4e0a\uff0c2\u4e2a12g\u663e\u5b58\u7684\u663e\u5361\u5982\u4f55\u914d\u7f6e\u53ef\u4ee5\u540c\u65f6\u4f7f\u7528", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\ncentos\u4e0a\uff0c2\u4e2a12g\u663e\u5b58\u7684\u663e\u5361\uff0c\u65e0\u8bba\u662f\u8bad\u7ec3\u8fd8\u662fweb\uff0c\u90fd\u59cb\u7ec8\u75280\u53f7\u663e\u5361\uff0c\u5982\u4f55\u914d\u7f6e\u53ef\u4ee5\u540c\u65f6\u4f7f\u7528\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nCentos7\r\n12G nvida *2\n\n### Environment\n\n```markdown\n- OS:Centos7\r\n- Python:3.8\r\n- Transformers:4.26.1\r\n- PyTorch: 1.12\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'edb127326a2d5afd855484f12a38e0119151f826', 'files': [{'path': 'ptuning/train.sh', 'Loc': {'(None, None, 4)': {'mod': [4]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nOther \u811a\u672c"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ptuning/train.sh"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "801b1bb57690f0a99943f0a80c839b9ee120f3a7", "is_iss": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/394", "iss_label": "", "title": "[BUG/Help] ValueError: 150000 is not in list", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n 0%| | 19/30000 [31:30<828:54:23, 99.53s/it]\r\n 0%| | 20/30000 [33:09<828:37:17, 99.50s/it]\r\n 0%| | 21/30000 [34:48<828:09:42, 99.45s/it]Traceback (most recent call last):\r\n File \"/root/projects/ChatGLM-6B/ptuning/main.py\", line 393, in <module>\r\n main()\r\n File \"/root/projects/ChatGLM-6B/ptuning/main.py\", line 332, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 1633, in train\r\n return inner_training_loop(\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 1902, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 2645, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 2677, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 1160, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 928, in forward\r\n mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 928, in 
<listcomp>\r\n mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]\r\nValueError: 150000 is not in list\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nPRE_SEQ_LEN=8\r\nLR=1e-2\r\n\r\nCUDA_VISIBLE_DEVICES=0 python3 main.py \\\r\n --do_train \\\r\n --train_file ../data/train.json \\\r\n --validation_file ../data/dev.json \\\r\n --prompt_column instruction \\\r\n --response_column output \\\r\n --overwrite_cache \\\r\n --model_name_or_path ~/projects/zero_nlp/simple_thu_chatglm6b/thuglm/ \\\r\n --output_dir output/adgen-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \\\r\n --overwrite_output_dir \\\r\n --max_source_length 64 \\\r\n --max_target_length 64 \\\r\n --per_device_train_batch_size 100 \\\r\n --per_device_eval_batch_size 100 \\\r\n --gradient_accumulation_steps 16 \\\r\n --predict_with_generate \\\r\n --max_steps 30000 \\\r\n --logging_steps 100 \\\r\n --save_steps 100 \\\r\n --learning_rate $LR \\\r\n --pre_seq_len $PRE_SEQ_LEN \\\r\n # --quantization_bit 4\r\n\r\n\r\n\n\n### Environment\n\n```markdown\n- OS: centos8\r\n- Python: 3.9\r\n- Transformers: 4.27.1\r\n- PyTorch:2.0.0\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ice_text.model", "modeling_chatglm.py"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n2", "info_type": "Code"}, "loctype": {"code": ["modeling_chatglm.py"], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "1047e446e5387aa06c856c95800f67beab8b80d4", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/224", "iss_label": "", "title": "[BUG/Help] ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n>>> model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\",trust_remote_code=True).float()\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 456, in from_pretrained\r\n logger.warning(\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 374, in get_class_from_dynamic_module\r\n\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 147, in get_class_in_module\r\n def get_class_in_module(class_name, module_path):\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in 
_find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"C:\\Users\\mina_/.cache\\huggingface\\modules\\transformers_modules\\THUDM\\chatglm-6b-int4\\dac03c3ac833dab2845a569a9b7f6ac4e8c5dc9b\\modeling_chatglm.py\", line 30, in <module>\r\n from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\generation\\utils.py\", line 39, in <module>\r\n from .configuration_utils import GenerationConfig\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\generation\\configuration_utils.py\", line 24, in <module>\r\n from ..utils import (\r\nImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils' (C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\utils\\__init__.py)\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n1. `conda activate chatglm-6b`\r\n2. `from transformers import AutoTokenizer, AutoModel`\r\n3. `tokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True)`\r\n4. `model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\",trust_remote_code=True).float()`\r\n5. See this issue.\n\n### Environment\n\n```markdown\n- OS: Windows 10\r\n- Python: 3.7.5\r\n- Transformers:\r\n- PyTorch:\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : False\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1047e446e5387aa06c856c95800f67beab8b80d4', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "b65142b5e54e52b27c1c1269e1b4abd83efcce45", "is_iss": 1, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/422", "iss_label": "", "title": "[BUG/Help] <title>KeyError: 'lm_head.weight'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u62a5\u9519\uff1aKeyError: 'lm_head.weight'\n\n### Expected Behavior\n\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\n\r\nLoading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]\r\nLoading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Administrator\\Downloads\\ChatGLM-6B-main\\cli_demo.py\", line 7, in <module>\r\n model = 
AutoModel.from_pretrained(r\"C:\\Users\\Administrator\\Downloads\\ChatGLM-6B-main\\model\",trust_remote_code=True,ignore_mismatched_sizes=True).half().quantize(4).cuda()\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 466, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2646, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2959, in _load_pretrained_model\r\n mismatched_keys += _find_mismatched_keys(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2882, in _find_mismatched_keys\r\n and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape\r\nKeyError: 'lm_head.weight'\r\n\n\n### Steps To Reproduce\n\n\u62a5\u9519\uff1aKeyError: 'lm_head.weight'\n\n### Environment\n\n```markdown\n- OS:windows 10\r\n- Python:3.10\r\n- Transformers:4.27.1\r\n- PyTorch:cu118\r\n- CUDA Support True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["pytorch_model-00001-of-00008.bin", "pytorch_model-00008-of-00008.bin"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Models/\u6570\u636e"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["pytorch_model-00001-of-00008.bin", "pytorch_model-00008-of-00008.bin"]}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "8633db1503fc3b0edc1d035f64aa35dce5d97969", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/622", "iss_label": "", "title": "[BUG/Help] ptuning\u65f6\uff0c\u6307\u5b9aPRE_SEQ_LEN=512\uff0c\u8bad\u7ec3\u540e\uff0c\u56de\u7b54\u7684\u95ee\u9898\u4ecd\u65e7\u6709\u56de\u7b54\u4e00\u767e\u5b57\u5de6\u53f3\u5c31\u65ad\u4e86\uff0c\u8be5\u5982\u4f55\u8c03\u6574\uff1f", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u8bad\u7ec3\u53c2\u6570\u5982\u4e0b\uff1a\r\nPRE_SEQ_LEN=512\r\nLR=2e-2\r\n\r\nCUDA_VISIBLE_DEVICES=0 python3 main.py \\\r\n --do_train \\\r\n --train_file ./data/gwddc.json \\\r\n --validation_file ./data/gwddc_test.json \\\r\n --prompt_column instruction \\\r\n --response_column output \\\r\n --overwrite_cache \\\r\n --model_name_or_path THUDM/chatglm-6b \\\r\n --output_dir output/adgen-chatglm-6b-pt-gwddc-v3 \\\r\n --overwrite_output_dir \\\r\n --max_source_length 64 \\\r\n --max_target_length 64 \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 16 \\\r\n --predict_with_generate \\\r\n --max_steps 3000 \\\r\n --logging_steps 10 \\\r\n --save_steps 1000 \\\r\n --learning_rate $LR \\\r\n --pre_seq_len $PRE_SEQ_LEN\r\n\r\n\u8bad\u7ec3\u6210\u529f\uff0c\u52a0\u8f7dcheckpoint\u6a21\u578b\u4e5f\u6210\u529f\uff0c\u8f93\u5165prompts\u4e5f\u80fd\u6b63\u5e38\u56de\u7b54\uff0c\u53ef\u662f\uff0c\u56de\u7b54\u957f\u5ea6\u4ecd\u65e7\u5f88\u77ed\uff0c\u8fd8\u4f1a\u51fa\u73b0\u56de\u7b54\u534a\u622a\u65ad\u6389\u7684\u60c5\u51b5\uff0c\u8bf7\u95ee\u8be5\u5982\u4f55\u8c03\u6574\u8bad\u7ec3\u53c2\u6570\uff1f\r\n\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n1. 
./data/gwddc.json\u4e3a\u81ea\u5907\u7684\u8bad\u7ec3\u96c6\uff0cprompts\u53ea\u6709\u4e0d\u52302000\u6761\r\n2. \u8f93\u5165\u4e0a\u8ff0\u53c2\u6570\u5e76\u8fd0\u884c\uff0c\u8bad\u7ec3\u7ed3\u679c\u4fe1\u606f\u5982\u4e0b\uff1a\r\n\u2026\u2026\r\n{'loss': 0.0371, 'learning_rate': 0.0, 'epoch': 96.77}\r\nSaving PrefixEncoder\r\n{'train_runtime': 21212.1807, 'train_samples_per_second': 9.051, 'train_steps_per_second': 0.141, 'train_loss': 0.2381483610868454, 'epoch': 96.77}\r\n***** train metrics *****\r\n epoch = 96.77\r\n train_loss = 0.2381\r\n train_runtime = 5:53:32.18\r\n train_samples = 1982\r\n train_samples_per_second = 9.051\r\n train_steps_per_second = 0.141\r\n\u5e2e\u770b\u662f\u4e0d\u662ftrain_loss\u4e0d\u884c\uff1f\u9700\u8981\u589e\u52a0\u8fed\u4ee3\u6b21\u6570\uff1f\n\n### Environment\n\n```markdown\n- OS:centos 7.6\r\n- Python:3.9\r\n- Transformers:4.27.1\r\n- PyTorch:2.0.0+cu117\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :True\n```\n\n\n### Anything else?\n\n\u53e6\u5916\uff0c\u7279\u522b\u8bad\u7ec3\u4e86\u201c\u4f60\u662f\u8c01\u201d\uff0c\u90e8\u7f72\u540e\uff0c\u4e5f\u6ca1\u751f\u6548\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8633db1503fc3b0edc1d035f64aa35dce5d97969', 'files': [{'path': 'ptuning/README.md', 'Loc': {'(None, None, 180)': {'mod': [180]}}, 'status': 'modified'}, {'path': 'ptuning/arguments.py', 'Loc': {\"('DataTrainingArguments', None, 65)\": {'mod': [123]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["ptuning/arguments.py"], "doc": ["ptuning/README.md"], "test": [], "config": [], "asset": []}} -{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a14bc1d32452d92613551eb5d523e00950913710", "is_iss": 0, "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/353", "iss_label": "enhancement", "title": "[Help] \u5982\u4f55\u652f\u6301\u591a\u663e\u5361", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u516c\u53f8\u5185\u90e8\u4f7f\u7528\uff0c\u88c5\u4e862\u5361\uff0c\u53d1\u73b0\u9ed8\u8ba4\u914d\u7f6e\u53ea\u67091\u5361\u5728\u8dd1\uff0c\u8bf7\u95ee\u5982\u4f55\u4f7f\u7528\u624d\u53ef\u4ee5\u4f7f\u7528\u591a\u5361\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n\u65e0\n\n### Environment\n\n```markdown\nOS: Ubuntu 20.04\r\nPython: 3.8\r\nTransformers: 4.26.1\r\nPyTorch: 1.12\r\nCUDA Support: True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a14bc1d32452d92613551eb5d523e00950913710', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\n\u5982\u4f55\u652f\u6301\u591a\u663e\u5361", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/1225", "iss_label": "wontfix", "title": "Bert output last hidden state", "body": "## \u2753 Questions & Help\r\n\r\nHi,\r\n\r\nSuppose we have an utterance of length 
24 (considering special tokens) and we right-pad it with 0 to max length of 64.\r\nIf we use Bert pertained model to get the last hidden states, the output would be of size [1, 64, 768]. \r\nCan we use just the first 24 as the hidden states of the utterance? I mean is it right to say that the output[0, :24, :] has all the required information?\r\nI realized that from index 24:64, the outputs has float values as well.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '34f28b2a1342fd72c2e4d4e5613855bfb9f35d34', 'files': [{'path': 'src/transformers/models/bert/modeling_bert.py', 'Loc': {\"('BertSelfAttention', 'forward', 276)\": {'mod': [279]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/models/bert/modeling_bert.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "82c7e879876822864b5ceaf2c99eb01159266bcd", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/27200", "iss_label": "", "title": "dataset download error in speech recognition examples", "body": "### System Info\n\n- `transformers` version: 4.35.0.dev0\r\n- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.10.0+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\n\n### Who can help?\n\n@stevhliu and @MKhalusova\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nCUDA_VISIBLE_DEVICES=0 python run_speech_recognition_ctc.py \\\r\n\t--dataset_name=\"common_voice\" \\\r\n\t--model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" \\\r\n\t--dataset_config_name=\"tr\" \\\r\n\t--output_dir=\"./wav2vec2-common_voice-tr-demo\" \\\r\n\t--overwrite_output_dir \\\r\n\t--num_train_epochs=\"15\" \\\r\n\t--per_device_train_batch_size=\"16\" \\\r\n\t--gradient_accumulation_steps=\"2\" \\\r\n\t--learning_rate=\"3e-4\" \\\r\n\t--warmup_steps=\"500\" \\\r\n\t--evaluation_strategy=\"steps\" \\\r\n\t--text_column_name=\"sentence\" \\\r\n\t--length_column_name=\"input_length\" \\\r\n\t--save_steps=\"400\" \\\r\n\t--eval_steps=\"100\" \\\r\n\t--layerdrop=\"0.0\" \\\r\n\t--save_total_limit=\"3\" \\\r\n\t--freeze_feature_encoder \\\r\n\t--gradient_checkpointing \\\r\n\t--chars_to_ignore , ? . ! 
- \\; \\: \\\" \u201c % \u2018 \u201d \ufffd \\\r\n\t--fp16 \\\r\n\t--group_by_length \\\r\n\t--push_to_hub \\\r\n\t--do_train --do_eval \n\n### Expected behavior\n\nWhen I run the default command, which set `dataset_name` as \"common_voice\", and I got a warning:\r\n```\r\n/home/xintong/.cache/huggingface/modules/datasets_modules/datasets/common_voice/220833898d6a60c50f621126e51fb22eb2dfe5244392c70dccd8e6e2f055f4bf/common_voice.py:634: FutureWarning: \r\n This version of the Common Voice dataset is deprecated.\r\n You can download the latest one with\r\n >>> load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\")\r\n \r\n warnings.warn(\r\nGenerating train split: 0%| | 0/1831 [00:00<?, ? examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 2578, in next\r\n tarinfo = self.tarinfo.fromtarfile(self)\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 1283, in fromtarfile\r\n obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 1221, in frombuf\r\n raise TruncatedHeaderError(\"truncated header\")\r\ntarfile.TruncatedHeaderError: truncated header\r\n```\r\nI modified this into `mozilla-foundation/common_voice_11_0`, it passed. \r\n```\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8.13k/8.13k [00:00<00:00, 30.3MB/s]\r\nDownloading readme: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14.4k/14.4k [00:00<00:00, 19.2MB/s]\r\nDownloading extra modules: 
100%| ... [repeated download progress-bar output elided: the remaining extra modules and all Common Voice data archives (568M, 233M, 285M, 109M, and several smaller files) each completed at 100%] ...\r\nDownloading data files: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:05<00:00, 1.01s/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:00<00:00, 2954.98it/s]\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '82c7e879876822864b5ceaf2c99eb01159266bcd', 'files': [{'path': 'examples/pytorch/speech-recognition/README.md', 'Loc': {'(None, None, 69)': {'mod': [69]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["examples/pytorch/speech-recognition/README.md"], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/12081", "iss_label": "", "title": "GPT2 Flax \"TypeError: JAX only supports number and bool dtypes, got dtype object in array\"", "body": "On GPU\r\n\r\n```\r\n>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM\r\n\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"gpt2-medium\")\r\n>>> model = FlaxAutoModelForCausalLM.from_pretrained(\"gpt2-medium\")\r\n>>> input_context = \"The dog\"\r\n>>> # encode input context\r\n>>> input_ids = tokenizer(input_context, return_tensors=\"jax\").input_ids\r\n>>> # generate candidates using sampling\r\n>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)\r\n\r\nTypeError: JAX only supports number and bool dtypes, got dtype object in array\r\n```\r\n\r\n@patrickvonplaten @patil-suraj ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 
'0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494', 'files': [{'path': 'src/transformers/models/gpt2/modeling_flax_gpt2.py', 'Loc': {\"('FlaxGPT2LMHeadModule', None, 553)\": {'mod': []}}, 'status': 'modified'}, {'path': 'src/transformers/models/gpt2/tokenization_gpt2_fast.py', 'Loc': {\"('GPT2TokenizerFast', None, 70)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [6, 7], 'path': None}]}", "own_code_loc": [{"Loc": [6, 7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "src/transformers/models/gpt2/modeling_flax_gpt2.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "322037e842e5e89080918c824998c17722df6f19", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/10079", "iss_label": "", "title": "Unclear error \"NotImplementedError: \"while saving tokenizer. How fix it?", "body": "Here is my tokenizer code and how I save it to a json file\" /content/bert-datas7.json\"\r\n\r\n````\r\nfrom tokenizers import normalizers\r\nfrom tokenizers.normalizers import Lowercase, NFD, StripAccents\r\n\r\nbert_tokenizer.pre_tokenizer = Whitespace()\r\n\r\nfrom tokenizers.processors import TemplateProcessing\r\n\r\nbert_tokenizer.post_processor = TemplateProcessing(\r\n single=\"[CLS] $A [SEP]\",\r\n pair=\"[CLS] $A [SEP] $B:1 [SEP]:1\",\r\n special_tokens=[\r\n (\"[CLS]\", 1),\r\n (\"[SEP]\", 2),\r\n (\"[PAD]\", 3),\r\n ],\r\n \r\n)\r\nfrom tokenizers.trainers import WordPieceTrainer\r\n\r\ntrainer = WordPieceTrainer(\r\n vocab_size=30522, special_tokens=[\"[UNK]\", \"[CLS]\", \"[SEP]\", \"[PAD]\", \"[MASK]\"], pad_to_max_length=True\r\n)\r\nfiles = [f\"/content/For_ITMO.txt\" for split in [\"test\", \"train\", \"valid\"]]\r\nbert_tokenizer.train(trainer, files)\r\n\r\nmodel_files = bert_tokenizer.model.save(\"data\", \"/content/For_ITMO.txt\")\r\n\r\nbert_tokenizer.model = WordPiece.from_file(*model_files, unk_token=\"[UNK]\", pad_to_max_length=True)\r\n\r\nbert_tokenizer.save(\"/content/bert-datas7.json\") \r\n````\r\n\r\nWhen I output tokenizer name_or_path = nothing is displayed. This is normal?\r\n\r\n\r\n````\r\ntokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\nprint(tokenizer)\r\n>>> PreTrainedTokenizerFast(name_or_path='', vocab_size=1435, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'pad_token': '[PAD]'})\r\n````\r\nAlso, when I try to save my tokenizer, I get an error without explanation. 
How can I rewrite the code so that all this???\r\n#9658 \r\n#10039 \r\n[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5945659/For_ITMO.txt-vocab.1.1.txt)\r\n \r\n````\r\ntokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-11-efc48254a528> in <module>()\r\n----> 1 tokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in save_vocabulary(self, save_directory, filename_prefix)\r\n 2042 :obj:`Tuple(str)`: Paths to the files saved.\r\n 2043 \"\"\"\r\n-> 2044 raise NotImplementedError\r\n 2045 \r\n 2046 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:\r\n\r\nNotImplementedError: \r\n````\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '322037e842e5e89080918c824998c17722df6f19', 'files': [{'path': 'src/transformers/tokenization_utils_fast.py', 'Loc': {\"('PreTrainedTokenizerFast', '_save_pretrained', 505)\": {'mod': [509]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/tokenization_utils_fast.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "77a257fc210a56f1fd0d75166ecd654cf58111f3", "is_iss": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/8403", "iss_label": "", "title": "[s2s finetune] huge increase in memory demands with --fp16 native amp", "body": "While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in gpu memory demands.\r\n\r\ne.g. I can run bs=12 w/o `--fp16` \r\n\r\n```\r\ncd examples/seq2seq\r\nexport BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6\r\n\r\n```\r\nBut if I add:\r\n```\r\n--fp16\r\n```\r\n\r\n(w/ or w/o `--fp16_opt_level O1`)\r\n\r\nI get OOM even with bs=1 on a 8GB card and it barely manages on a 24GB card - I think the increase in memory demand is more than 10x.\r\n\r\nThe OOM either right away when it does the sanity check step, or after just 10-20 batches - so within a few secs\r\n\r\nThis is with pytorch-1.6. Same goes for pytorch-1.7 and 1.8-nightly.\r\n\r\nI wasn't able to test `--fp16` with pytorch-1.5, since I can't build apex on ubuntu-20.04. Without `--fp16` pytorch-1.5 works the same as pytorch-1.6 gpu memory-wise.\r\n\r\nI tested with pytorch-1.5 + apex and there is no problem there. 
Memory consumption is about half.\r\n\r\nHere is the table of the batch sizes that fit into a 8gb rtx-1070 (bigger BS leads to an instant OOM):\r\n\r\nbs | version\r\n---|--------\r\n12 | pt15\r\n20 | pt15+fp16\r\n12 | pt16\r\n1 | pt16+fp16\r\n\r\n\r\n\r\nIf you'd like to reproduce the problem here are the full steps:\r\n\r\n```\r\n# prep library\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e .[dev]\r\npip install -r examples/requirements.txt\r\ncd examples/seq2seq\r\n\r\n# prep data\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\nmv cnn_cln cnn_dm\r\n\r\n# run\r\nexport BS=12; \r\nrm -rf distilbart-cnn-12-6\r\npython finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6 \r\n```\r\n\r\nThis issue is to track the problem and hopefully finding a solution.\r\n\r\n@sshleifer ", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57", "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "pytorch", "pro": "pytorch", "path": ["{'base_commit': '57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57', 'files': [{'path': 'aten/src/ATen/autocast_mode.cpp', 'status': 'modified', 'Loc': {\"(None, 'cached_cast', 67)\": {'mod': [71]}}}, {'path': 'test/test_cuda.py', 'status': 'modified', 'Loc': {\"('TestCuda', None, 92)\": {'add': [2708]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["aten/src/ATen/autocast_mode.cpp"], "doc": [], "test": ["test/test_cuda.py"], "config": [], "asset": ["pytorch"]}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "1a688709b34b10bd372e3e0860c8d39d170ebf53", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/17201", "iss_label": "", "title": "a memory leak in qqp prediction using bart", "body": "### System Info\n\n```shell\n- `transformers` version: 4.19.0.dev0\r\n- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.4.0\r\n- PyTorch version (GPU?): 1.10.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\n```\n\n\n### Who can help?\n\n@sgugger\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI met the same issue #11011. If not using `--eval_accumulation_steps`, it caused CUDA out of memory. If using it, it caused out of RAM and killed by system.\r\n\r\nI only did prediction on GLUE QQP dataset using bart without fine-tuning. 
Considering QQP having a large test set (300k), the prediction got slower and slower, and finally got out of memory.\r\n\r\nThis is the script to reproduce:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24\r\n```\n\n### Expected behavior\n\n```shell\nPrediction without out memory.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1a688709b34b10bd372e3e0860c8d39d170ebf53', 'files': [{'path': 'src/transformers/trainer.py', 'Loc': {\"('Trainer', 'evaluation_loop', 2549)\": {'mod': [2635]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2\nOr\n5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/28435", "iss_label": "", "title": "Skip some weights for load_in_8bit and keep them as fp16/32?", "body": "### Feature request\r\n\r\nHello,\r\n\r\nI am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.\r\n\r\n### Motivation\r\n\r\nMy motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.\r\n\r\nAs far as I can see in the documentation, issues and with Google (both here and for bitsandbytes), there is currently no way to do this.\r\n\r\n### Your contribution\r\n\r\nI can in theory help implement something like this but I don't know where and how in the code this should be done.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5', 'files': [{'path': 'src/transformers/modeling_utils.py', 'Loc': {\"('PreTrainedModel', 'from_pretrained', 2528)\": {'mod': [3524]}}, 'status': 'modified'}, {'path': 'src/transformers/utils/quantization_config.py', 'Loc': {\"('BitsAndBytesConfig', None, 151)\": {'mod': [176]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/utils/quantization_config.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "705ca7f21b2b557e0cfd5d0853b297fa53489d20", "is_iss": 1, "iss_html_url": "https://github.com/huggingface/transformers/issues/14938", "iss_label": "", "title": "Question: Object of type EncoderDecoderConfig is not JSON serializable", "body": "Hi.\r\nAn error occurred when I used Trainer to train and save EncoderDecoderModel.\r\n\r\n```python\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 482, in <module>\r\n run(model_args, data_args, training_args)\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 465, in run\r\n train_result = 
trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1391, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1495, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1557, in _save_checkpoint\r\n self.save_model(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1961, in save_model\r\n self._save(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 2009, in _save\r\n self.model.save_pretrained(output_dir, state_dict=state_dict)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 1053, in save_pretrained\r\n model_to_save.config.save_pretrained(save_directory)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 416, in save_pretrained\r\n self.to_json_file(output_config_file, use_diff=True)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 739, in to_json_file\r\n writer.write(self.to_json_string(use_diff=use_diff))\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 725, in to_json_string\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type EncoderDecoderConfig is not JSON serializable\r\n```\r\nMy model and Config define the following code. 
\r\n```python\r\n tokenizer = RobertaTokenizerFast.from_pretrained(model_args.tokenizer_name)\r\n encoder_config = RobertaConfig.from_pretrained(model_args.encoder_model_name_or_path)\r\n decoder_config = RobertaConfig.from_pretrained(model_args.decoder_model_name_or_path)\r\n encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)\r\n model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,\r\n model_args.decoder_model_name_or_path,\r\n config=encoder_decoder_config, tie_encoder_decoder=True)\r\n model.config.decoder_start_token_id = tokenizer.bos_token_id\r\n model.config.eos_token_id = tokenizer.eos_token_id\r\n model.config.max_length = 64\r\n model.config.early_stopping = True\r\n model.config.no_repeat_ngram_size = 3\r\n model.config.length_penalty = 2.0\r\n model.config.num_beams = 4\r\n model.config.pad_token_id = tokenizer.pad_token_id\r\n```\r\nThis error occurred because EncoderDecoderConfig cannot be converted to json format. But I don't know how to modify it.\r\n```python\r\nERROR OCCURRED:\r\n\r\n if use_diff is True:\r\n config_dict = self.to_diff_dict()\r\n else:\r\n config_dict = self.to_dict()\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n```\r\n\r\nI look forward to your help! Thanks!\r\n @jplu @patrickvonplaten ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [46, 47], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "45d21502f0b67eb8a5ad244d469dcc0dfb7517a7", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/653", "iss_label": "", "title": "Different Results from version 0.4.0 to version 0.5.0", "body": "Hi, I found the results after training is different from version 0.4.0 to version 0.5.0. I have fixed all initialization to reproduce the results. And I also test version 0.2.0 and 0.3.0, the results are the same to version 0.4.0, but from version 0.5.0 +, the results is different. I am wondering that have you trained a new model, so the weights changed? 
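One way to settle that question directly is to fingerprint the released weights under each version rather than comparing training curves. A minimal diagnostic sketch (the model name is illustrative; it assumes the old pytorch_pretrained_bert package this report concerns):

```python
import hashlib

from pytorch_pretrained_bert import BertModel

def state_dict_fingerprint(model):
    # Hash every parameter tensor, sorted by name, so two installs
    # can be compared byte for byte.
    digest = hashlib.sha256()
    for name, tensor in sorted(model.state_dict().items(), key=lambda kv: kv[0]):
        digest.update(name.encode("utf-8"))
        digest.update(tensor.detach().cpu().numpy().tobytes())
    return digest.hexdigest()

model = BertModel.from_pretrained("bert-base-uncased")
print(state_dict_fingerprint(model))  # run once per installed version and compare
```

Identical digests under 0.4.0 and 0.5.0 would rule out a weight change and point at the code path instead.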
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '45d21502f0b67eb8a5ad244d469dcc0dfb7517a7', 'files': [{'path': 'pytorch_pretrained_bert/modeling.py', 'Loc': {\"('BertPreTrainedModel', 'init_bert_weights', 515)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["pytorch_pretrained_bert/modeling.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/10202", "iss_label": "", "title": "Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True", "body": "## Environment info\r\n- `transformers` version: 4.3.2\r\n- Platform: macOS-11.2.1-x86_64-i386-64bit\r\n- Python version: 3.9.1\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n## Information\r\n\r\nSee title; this issue does not reproduce with slow tokenizers. Does not reproduce with serialized tokenizers.\r\n\r\nFound while investigating https://github.com/minimaxir/aitextgen/issues/88\r\n\r\n## To reproduce\r\n\r\nUsing [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\noutputs = model.generate(max_length=40)\r\n\r\n# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,\r\n# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,\r\n# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,\r\n# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])\r\n\r\ntokenizer_fast = GPT2TokenizerFast(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_fast.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\\n\\n\\n'\r\n\r\ntokenizer_slow = GPT2Tokenizer(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_slow.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. 
Capitol on Wednesday.\\n\\n\\n'\r\n\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885', 'files': [{'path': 'src/transformers/tokenization_utils_base.py', 'Loc': {\"('SpecialTokensMixin', 'add_special_tokens', 900)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [33], 'path': None}]}", "own_code_loc": [{"Loc": [33], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "Comment points out the problem in the user's code and gives the API that needs to be used\nA problem in the user's own code; another issue points out the commit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them to the tokenizer:\ntokenizer_fast.add_special_tokens({\n \"additional_special_tokens\": \"<|endoftext|>\"\n})\n", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/tokenization_utils_base.py", null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "huggingface", "repo_name": "transformers", "base_commit": "5bcbdff15922b1d0eeb035879630ca61c292122a", "is_iss": 0, "iss_html_url": "https://github.com/huggingface/transformers/issues/32661", "iss_label": "bug", "title": "RoBERTa config defaults are inconsistent with fairseq implementation", "body": "### System Info\n\n python 3.12, transformers 4.14, latest mac os\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import RobertaConfig\r\nmy_config = RobertaConfig()\r\nroberta_config = RobertaConfig.from_pretrained(\"roberta-base\")\r\n\r\nassert (\r\n my_config.max_position_embeddings == roberta_config.max_position_embeddings\r\n), \"%d %d\" % (my_config.max_position_embeddings, roberta_config.max_position_embeddings)\n\n### Expected behavior\n\nThe config defaults should correspond to the base model?\r\n\r\nThis is an implementation detail, but it did send me on a debugging spree as it hid as a sticky CUDA assertion error.\r\n```Assertion `srcIndex < srcSelectDimSize` failed```\r\n\r\nThe problem is that by default if you create the position_ids yourself or if you let transformers roberta_modelling take care of it (it also does it the way fairseq implemented it), it will create indices that are out of bounds with the default configuration as everything is shifted by pad_token_id.\r\n\r\nThis is more of a heads up. 
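For context, the mismatch the assert above trips over is the fairseq-style numbering: position ids start at pad_token_id + 1, so the released roberta-base checkpoint carries max_position_embeddings=514 for 512 usable positions, while a bare RobertaConfig() defaults to 512. A sketch of a config that stays in bounds (the values are the standard roberta-base ones, not taken from this report):

```python
from transformers import RobertaConfig

# roberta-base numbers positions from pad_token_id + 1, so the embedding
# table needs 512 usable positions + pad_token_id + 1 = 514 rows.
config = RobertaConfig(max_position_embeddings=514, pad_token_id=1)
assert config.max_position_embeddings == 514
```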
Do transformers generally provide defaults aligned with the original models, or are the defaults here meant to be agnostic of that?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5bcbdff15922b1d0eeb035879630ca61c292122a', 'files': [{'path': 'src/transformers/models/roberta/configuration_roberta.py', 'Loc': {\"('RobertaConfig', None, 29)\": {'mod': [59]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/models/roberta/configuration_roberta.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f0df3144d68ed288f5ccce0c34d3939f8462ba98", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1345", "iss_label": "", "title": "Not able to run any MetaGPT examples", "body": "Referred Issue #1322 , but not able to resolve the issue. I added azure based api endpoint and api key in config2.yaml\r\n\r\n\r\n\u2502 105 \u2502 \u2502 typer.echo(\"Missing argument 'IDEA'. Run 'metagpt --help' for more information.\" \u2502\r\n\u2502 106 \u2502 \u2502 raise typer.Exit() \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 \u2771 108 \u2502 return generate_repo( \u2502\r\n\u2502 109 \u2502 \u2502 idea, \u2502\r\n\u2502 110 \u2502 \u2502 investment, \u2502\r\n\u2502 111 \u2502 \u2502 n_round, \u2502\r\n\u2502 \u2502\r\n\\metagpt\\software_company.py:30 in generate_repo \u2502\r\n\u2502 \u2502\r\n\u2502 27 \u2502 recover_path=None, \u2502\r\n\u2502 28 ) -> ProjectRepo: \u2502\r\n\u2502 29 \u2502 \"\"\"Run the startup logic. Can be called from CLI or other Python scripts.\"\"\" \u2502\r\n\u2502 \u2771 30 \u2502 from metagpt.config2 import config \u2502\r\n\u2502 31 \u2502 from metagpt.context import Context \u2502\r\n\u2502 32 \u2502 from metagpt.roles import ( \u2502\r\n\u2502 33 \u2502 \u2502 Architect, \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:164 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 161 \u2502 return result \u2502\r\n\u2502 162 \u2502\r\n\u2502 163 \u2502\r\n\u2502 \u2771 164 config = Config.default() \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:106 in default \u2502\r\n\u2502 \u2502\r\n\u2502 103 \u2502 \u2502 dicts = [dict(os.environ)] \u2502\r\n\u2502 104 \u2502 \u2502 dicts += [Config.read_yaml(path) for path in default_config_paths] \u2502\r\n\u2502 105 \u2502 \u2502 final = merge_dict(dicts) \u2502\r\n\u2502 \u2771 106 \u2502 \u2502 return Config(**final) \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 108 \u2502 @classmethod \u2502\r\n\u2502 109 \u2502 def from_llm_config(cls, llm_config: dict): \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\pydantic\\main.py:176 in __init__ \u2502\r\n\u2502 \u2502\r\n\u2502 173 \u2502 \u2502 \"\"\" \u2502\r\n\u2502 174 \u2502 \u2502 # `__tracebackhide__` tells pytest and some other tools to omit this function fr \u2502\r\n\u2502 175 \u2502 \u2502 __tracebackhide__ = True \u2502\r\n\u2502 \u2771 176 \u2502 \u2502 self.__pydantic_validator__.validate_python(data, self_instance=self) \u2502\r\n\u2502 177 \u2502 \u2502\r\n\u2502 178 \u2502 # The following line sets a flag that we use to determine when `__init__` gets overr \u2502\r\n\u2502 179 \u2502 __init__.__pydantic_base_init__ = True # 
pyright: ignore[reportFunctionMemberAccess \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nValidationError: 1 validation error for Config\r\nllm\r\n Field required [type=missing, input_value={'ALLUSERSPROFILE': 'C:\\\\..._INIT_AT_FORK': 'FALSE'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.7/v/missing", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f0df3144d68ed288f5ccce0c34d3939f8462ba98', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "e43aaec9322054f4dec92f44627533816588663b", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/576", "iss_label": "", "title": "\u8bf7\u95eemetagpt\u662f\u5426\u652f\u6301\u5411\u91cf\u6570\u636e\uff0c\u6784\u5efa\u81ea\u5df1\u7684\u77e5\u8bc6\u5e93", "body": "\u8bf7\u95eemetagpt\u662f\u5426\u652f\u6301\u5411\u91cf\u6570\u636e\uff0c\u6784\u5efa\u81ea\u5df1\u7684\u77e5\u8bc6\u5e93", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e43aaec9322054f4dec92f44627533816588663b', 'files': [{'path': '/metagpt/document_store', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["/metagpt/document_store"], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "be56351e000a0f08562820fb04f6fdbe34d9e655", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/205", "iss_label": "", "title": "Rate Limited error", "body": "openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-fK5bb25UFhVbebfBtfCejGc4 on tokens per min. Limit: 10000 / min. Please try again in 6ms. 
Contact us through our help center at help.openai.com if you continue to have issues.\r\n\r\nMaybe a way to resume so all the runtime isn't just lost?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'be56351e000a0f08562820fb04f6fdbe34d9e655', 'files': [{'path': 'metagpt/provider/openai_api.py', 'Loc': {\"('OpenAIGPTAPI', '_achat_completion_stream', 150)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/provider/openai_api.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "fd7feb57fac8d37509b1325cad502d2f65d59956", "is_iss": 1, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1553", "iss_label": "inactive", "title": "ValueError: Creator not registered for key: LLMType.OLLAMA", "body": "**Bug description**\r\n<!-- Clearly and directly describe the current bug -->\r\nI using ***MetaGPT ver 0.8.1*** but when use RAG with method **SimpleEngine.from_docs** have error ***ValueError: Creator not registered for key: LLMType.OLLAMA***\r\n\r\n<!-- **Bug solved method** -->\r\n<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of your Pull Request. -->\r\n<!-- If not, provide more auxiliary information to facilitate our further positioning and investigation -->\r\n\r\n**Environment information**\r\n<!-- Environment\uff1aSystem version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->\r\n\r\n- LLM type and model name: ollama and model: hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\r\n- System version:\r\n- Python version: 3.10\r\n- MetaGPT version or branch: 0.8.1\r\n\r\n<!-- Dependent packagess\uff1athe packages version cause the bug(like `pydantic 1.10.8`), installation method\uff08like `pip install metagpt` or `pip install from source` or `run in docker`\uff09 -->\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n<!-- Screenshots or logs of the bug can help us understand the problem more quickly -->\r\n***config2.yaml***\r\nembedding:\r\n api_type: \"ollama\"\r\n model: \"hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\"\r\n base_url: \"http://127.0.0.1:11434/api\"\r\n\r\nllm:\r\n api_type: \"ollama\"\r\n model: \"hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\"\r\n base_url: \"http://127.0.0.1:11434/api\"\r\n\r\n***Error Response***\r\n[/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py](https://localhost:8080/#) in get_instance(self, key, **kwargs)\r\n 27 return creator(**kwargs)\r\n 28 \r\n---> 29 raise ValueError(f\"Creator not registered for key: {key}\")\r\n 30 \r\n 31 \r\n\r\nValueError: Creator not registered for key: LLMType.OLLAMA\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": "config/config2.yaml", "Loc": [28]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "df8d1124c68b62bb98c71b6071abf5efe6293dba", "is_iss": 0, "iss_html_url": 
"https://github.com/geekan/MetaGPT/issues/15", "iss_label": "", "title": "\u8bf7\u95ee\u5982\u4f55\u914d\u7f6e\u4f7f\u7528Azure\u4e0a\u7684api\uff1f", "body": "\u4f60\u597d\uff0c \r\n\u6211\u770b\u5230\u6587\u6863\u4e2d\u9700\u8981\u914d\u7f6eopenAI\u7684key\uff0c\u4f46\u662f\u6211\u6ce8\u610f\u5230\u5728provider\u4e2d\u6709azure_api\u7684\u76f8\u5173\u6587\u4ef6,\r\n\u8bf7\u95ee\u662f\u5426\u5728\u54ea\u4e2a\u5730\u65b9\u53ef\u4ee5\u914d\u7f6e\u8ba9\u4ed6\u4f7f\u7528azure\u63d0\u4f9b\u7684\u670d\u52a1\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'df8d1124c68b62bb98c71b6071abf5efe6293dba', 'files': [{'path': 'config/config.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config.yaml"], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "dfa33fcdaade1e4f8019835bf065d372d76724ae", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/924", "iss_label": "", "title": "GLM4\u4e00\u76f4\u62a5\u9519", "body": "2024-02-22 16:50:26.666 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 80.109(s), this was the 5th time calling it. exp: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Full API spec', 'Required Python packages', 'Required Other language third-party packages'} [type=value_error, input_value={'Required JavaScript pac...ation and development.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.5/v/value_error", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dfa33fcdaade1e4f8019835bf065d372d76724ae', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "80a189ad4a1546f8c1a9dbe00c42725868c35e5e", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/135", "iss_label": "", "title": "failed to launch chromium browser process errors", "body": "get errors on launch of browser process; below is the error from terminal which happens for all browser processes trying to launch.\r\n\r\n```\r\nINFO | metagpt.utils.mermaid:mermaid_to_file:38 - Generating /Users/lopezdp/DevOps/Ai_MetaGPT/workspace/test_app/resources/competitive_analysis.pdf..\r\n\r\nError: Failed to launch the browser process! 
spawn /usr/bin/chromium ENOENT\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at ChildProcess.onClose (file:///Users/lopezdp/DevOps/Ai_MetaGPT/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at ChildProcess.emit (node:events:513:28)\r\n at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)\r\n at onErrorNT (node:internal/child_process:485:16)\r\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '80a189ad4a1546f8c1a9dbe00c42725868c35e5e', 'files': [{'path': 'config/puppeteer-config.json', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": ["config/puppeteer-config.json"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1115", "iss_label": "", "title": "The following error appears on every run", "body": "![image](https://github.com/geekan/MetaGPT/assets/115678682/1fb58e0b-47a7-4e1f-a7b7-924ea9adedb0)\r\n\r\n2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp:\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n rsp = await self.react()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 626, in wrapper\r\n result = await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\team.py\", line 134, in run\r\n await self.env.run()\r\nException: Traceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in 
retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 297, in decode\r\n return super().decode(s)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 65, in scan_once\r\n return _scan_once(string, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 36, in _scan_once\r\n return parse_object((string, idx + 1), strict, _scan_once, object_hook, object_pairs_hook, memo)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 164, in JSONObject\r\n value, end = scan_once(s, end)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 34, in _scan_once\r\n return parse_string(string, idx + 1, strict, delimiter=nextchar)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 227, in py_scanstring\r\n raise JSONDecodeError(\"Unterminated string starting at\", s, begin)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\llm_output_postprocess.py\", line 19, in llm_output_postprocess\r\n result = postprocess_plugin.run(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 68, in run\r\n new_output = self.run_repair_llm_output(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 32, in run_repair_llm_output\r\n parsed_data = self.run_retry_parse_json_text(content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 47, in run_retry_parse_json_text\r\n parsed_data = retry_parse_json_text(output=content) # should use output=content\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 289, in wrapped_f\r\n return self(f, *args, **kw)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 379, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n 
rsp = await self.react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 517, in react\r\n rsp = await self._react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 463, in _react\r\n rsp = await self._act()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 392, in _act\r\n response = await self.rc.todo.run(self.rc.history)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 58, in run\r\n doc = await self._update_system_design(filename=filename)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 86, in _update_system_design\r\n system_design = await self._new_system_design(context=prd.content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 73, in _new_system_design\r\n node = await DESIGN_API_NODE.fill(context=context, llm=self.llm)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 505, in fill\r\n return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 457, in simple_fill\r\n content, scontent = await self._aask_v1(\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 88, in async_wrapped\r\n return await fn(*args, **kwargs)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 47, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d', 'files': [{'path': 'metagpt/strategy/planner.py', 'Loc': {\"('Planner', 'update_plan', 68)\": {'mod': [75]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/strategy/planner.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/754", "iss_label": "", "title": "SubscriptionRunner", "body": "import asyncio\r\nfrom metagpt.subscription import SubscriptionRunner\r\nfrom metagpt.roles import Searcher\r\nfrom metagpt.schema import Message\r\n\r\nasync def trigger():\r\n while True:\r\n yield Message(\"the latest news about OpenAI\")\r\n await asyncio.sleep(1)\r\n\r\n\r\nasync def callback(msg: Message):\r\n print(msg.content)\r\n\r\n\r\n# async def main():\r\n# aa = trigger()\r\n# async for i in aa:\r\n# await callback(i)\r\nasync def main():\r\n pd = SubscriptionRunner()\r\n await pd.subscribe(Searcher(), trigger(), callback)\r\n await pd.run()\r\n\r\nasyncio.run(main())\r\n\u5728\u521b\u5efaRunner\u65f6\u5019\u62a5\u9519\uff0c0.6.3\u7248\u672c\r\nTraceback (most recent call last):\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 44, in <module>\r\n asyncio.run(main())\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 190, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File 
\"C:\\Users\\uweih034\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\base_events.py\", line 653, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 40, in main\r\n pd = SubscriptionRunner()\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\main.py\", line 164, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\_internal\\_mock_val_ser.py\", line 47, in __getattr__\r\n raise PydanticUserError(self._error_message, code=self._code)\r\npydantic.errors.PydanticUserError: `SubscriptionRunner` is not fully defined; you should define `Environment`, then call `SubscriptionRunner.model_rebuild()`.\r\n\r\nFor further information visit https://errors.pydantic.dev/2.5/u/class-not-fully-defined", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'bdf9d224b5a05228897553a29214adc074fbc465', 'files': [{'path': 'metagpt/environment.py', 'Loc': {\"('Environment', None, 27)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [21], 'path': None}]}", "own_code_loc": [{"Loc": [21], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "metagpt/environment.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f88fa9e2df09c28f867bda54ec24fa25b50be830", "is_iss": 0, "iss_html_url": "https://github.com/geekan/MetaGPT/issues/178", "iss_label": "", "title": "Specify Directory of pdf documents as Knowledge Base", "body": "Hi, how can we specify any folder which includes pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from within the documents in KB?\r\n\r\nAny help would be highly appreciated\r\n\r\nThanks much appreciated", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f88fa9e2df09c28f867bda54ec24fa25b50be830', 'files': [{'path': 'metagpt/document_store', 'Loc': {}}, {'path': 'tests/metagpt/document_store', 'Loc': {}}, {'path': 'examples/search_kb.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/search_kb.py"], "doc": ["metagpt/document_store", "tests/metagpt/document_store"], "test": [], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7e756b9db56677636e6920c1e6628d13e980aec7", "is_iss": 1, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/6006", "iss_label": "bug", "title": "All custom components throw errors after update to latest version", "body": "### Bug Description\n\n```\n[01/29/25 00:15:00] ERROR 2025-01-29 00:15:00 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n <class 'pydantic._internal._model_construction.ModelMetaclass'> \n``` \n\n### Reproduction\n\n1. 
langflow updated to v1.1.2 from v1.1.1\n2. all previously created custom components throwing error:\n\n[01/29/25 00:24:09] ERROR 2025-01-29 00:24:09 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n <class 'pydantic._internal._model_construction.ModelMetaclass'> \n\n### Expected behavior\n\nLangflow should build tool correctly, as on previous version. \n\nSimplified failing code:\n```python\nfrom langflow.custom import Component\nfrom langflow.io import Output\nfrom langflow.schema import Data\nfrom langflow.field_typing import Tool\nfrom langchain.tools import StructuredTool\nfrom pydantic import BaseModel, Field\n\nclass MinimalSchema(BaseModel):\n input_text: str = Field(..., description=\"Text Input\")\n\nclass SimpleToolComponentMinimalSchema(Component):\n display_name = \"Simple Tool Minimal Schema Test\"\n description = \"Component with StructuredTool and minimal schema\"\n outputs = [Output(display_name=\"Tool\", name=\"test_tool\", method=\"build_tool\")]\n\n class MinimalSchema(BaseModel): # Define inner schema\n input_text: str = Field(..., description=\"Text Input\")\n\n def build_tool(self) -> Tool:\n return StructuredTool.from_function( # Return directly - simplified\n name=\"minimal_tool\",\n description=\"Minimal tool for testing schema\",\n func=self.run_tool,\n args_schema=SimpleToolComponentMinimalSchema.MinimalSchema\n )\n\n def run_tool(self, input_text: str) -> str:\n return f\"Tool received: {input_text}\"\n``` \n\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nwsl Ubuntu latest\n\n### Langflow Version\n\n1.1.2\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [40], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "19818db68b507332be71f30dd90d16bf4c7d6f83", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3718", "iss_label": "enhancement", "title": "Add pgVector in the building instructions for the PostgreSQL Docker image", "body": "### Feature Request\r\n\r\nInclude the pgVector component with the Docker build instructions. This would provide the use with a fully functional PostgreSQL Vector DB, ready to be used inside LangFlow.\r\n\r\n### Motivation\r\n\r\nI am not a programmer, neither I do have proper knowledge of SQL, but I liked to play with some RAG ideas and LangFlow seems perfect. \r\nSo, after installing the Docker version for development of LangFlow, I noticed that the PostgreSQL server is missing the pgVector component, or at least that is what I understood from the error messages. \r\nPerhaps, it would be useful if the pgVector could be included in the Docker container, so having the user to just activate it on the SQL database. 
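For what it's worth, once the extension binaries are baked into the image, that activation step is a single statement per database. A minimal sketch, assuming psycopg2 and placeholder connection details (none of these names come from the compose file):

```python
import psycopg2

# Placeholder credentials; substitute whatever the docker-compose file defines.
conn = psycopg2.connect(host="localhost", dbname="langflow",
                        user="postgres", password="postgres")
with conn, conn.cursor() as cur:
    # Enables pgvector for this database; requires superuser rights.
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
conn.close()
```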
Anyway, I might be wrong, so in that case please forgive me.\r\n\r\n### Your Contribution\r\n\r\nAfter looking into the repository and searching around, with the help of AI (of course!), I found that the Docker instructions for the PostgreSQL server are defined inside the file \\docker\\cdk.Dockerfile (hope it's correct), and these might be the instructions to include pgVector:\r\n\r\n```\r\nFROM --platform=linux/amd64 python:3.10-slim\r\n\r\nWORKDIR /app\r\n\r\n# Install Poetry and build dependencies\r\nRUN apt-get update && apt-get install -y \\\r\n gcc \\\r\n g++ \\\r\n curl \\\r\n build-essential \\\r\n git \\\r\n postgresql-server-dev-all \\\r\n && rm -rf /var/lib/apt/lists/*\r\n\r\n# Install Poetry\r\nRUN curl -sSL https://install.python-poetry.org | python3 -\r\n\r\n# Add Poetry to PATH\r\nENV PATH=\"${PATH}:/root/.local/bin\"\r\n\r\n# Copy the pyproject.toml and poetry.lock files\r\nCOPY poetry.lock pyproject.toml ./\r\n\r\n# Copy the rest of the application codes\r\nCOPY ./ ./\r\n\r\n# Install dependencies\r\nRUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi\r\n\r\n# Install pgvector extension\r\nRUN git clone https://github.com/pgvector/pgvector.git /tmp/pgvector && \\\r\n cd /tmp/pgvector && \\\r\n make && \\\r\n make install && \\\r\n rm -rf /tmp/pgvector\r\n\r\n# Install additional dependencies\r\nRUN poetry add botocore\r\nRUN poetry add pymysql\r\n\r\n# Command to run your application\r\nCMD [\"sh\", \"./container-cmd-cdk.sh\"]\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '19818db68b507332be71f30dd90d16bf4c7d6f83', 'files': [{'path': 'docker_example/docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nor\n4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": ["docker_example/docker-compose.yml"], "test": [], "config": [], "asset": []}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "2ddd7735129b0f35fd617f2634d35a3690b06630", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/4528", "iss_label": "bug", "title": "Can't access flow directly by link", "body": "### Bug Description\n\nWhen you try to access a flow using it's URL (ex. http://localhost:55650/flow/0b95342f-6ce4-43d0-9d60-c28bf66a3781), the page doesn't load and in the browser's console is shown the following message: ``Uncaught SyntaxError: Unexpected token '<' (at index-DK9323ab.js:1:1)``. I think that this problem is related to #1182 .\r\n\r\nNavegating through the main page to access this flow works fine. If I reload the page, it doesn't load as described before.\n\n### Reproduction\n\n1. Run the Docker image langflowui/langflow\r\n2. Open the langflow main page\r\n3. Creates a new flow\r\n4. 
Copy the flow link into a new tab or just reload the page\n\n### Expected behavior\n\nTo open the flow editor page.\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nDocker image (langflowai/langflow) running in K8s\n\n### Langflow Version\n\n1.0.19\n\n### Python Version\n\nNone\n\n### Screenshot\n\nInstead of loading the JS file, is loaded the HTML file as shown in the following picture:\r\n\r\n![image](https://github.com/user-attachments/assets/4a192a11-5389-497f-8898-fd0598684ca1)\r\n\r\nAll requests in this image loads the same HTML.\r\n\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2ddd7735129b0f35fd617f2634d35a3690b06630', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "ed53fcd3b042ecb5ed04c9c4562c459476bd6763", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3896", "iss_label": "bug", "title": "redis.exceptions.ResponseError: unknown command 'module'", "body": "### Bug Description\n\nredis.exceptions.ResponseError: unknown command 'module'\r\n\r\nhttps://github.com/user-attachments/assets/32ea6046-d5f1-4d85-96b5-41d381776986\r\n\r\n\n\n### Reproduction\n\nAdd a redis click run error, see the video\n\n### Expected behavior\n\nResponseError: unknown command 'MODULE'\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nwindows \n\n### Langflow Version\n\n1.0.18\n\n### Python Version\n\n3.11\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ed53fcd3b042ecb5ed04c9c4562c459476bd6763', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7d400903644230a8842ce189ca904ea9f8048b07", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/1239", "iss_label": "bug", "title": "cannot import name 'DEFAULT_CONNECTION_STRING' in v0.6.3a5", "body": "\r\n\r\n```\r\n% git branch\r\n* (HEAD detached at v0.6.3a5)\r\n dev\r\n% cd docker_example \r\n% docker compose up\r\n\r\n[+] Running 1/0\r\n \u2714 Container docker_example-langflow-1 Created 0.0s \r\nAttaching to langflow-1\r\nlangflow-1 | Traceback (most recent call last):\r\nlangflow-1 | File \"/home/user/.local/bin/langflow\", line 5, in <module>\r\nlangflow-1 | from langflow.__main__ import main\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/__init__.py\", line 5, in <module>\r\nlangflow-1 | from langflow.processing.process import load_flow_from_json\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/processing/process.py\", line 10, in <module>\r\nlangflow-1 | from langflow.graph import Graph\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/__init__.py\", line 2, in <module>\r\nlangflow-1 | 
from langflow.graph.graph.base import Graph\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/graph/base.py\", line 7, in <module>\r\nlangflow-1 | from langflow.graph.graph.constants import lazy_load_vertex_dict\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/graph/constants.py\", line 1, in <module>\r\nlangflow-1 | from langflow.graph.vertex import types\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/vertex/types.py\", line 5, in <module>\r\nlangflow-1 | from langflow.graph.vertex.base import Vertex\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/vertex/base.py\", line 9, in <module>\r\nlangflow-1 | from langflow.interface.initialize import loading\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/initialize/loading.py\", line 17, in <module>\r\nlangflow-1 | from langflow.interface.custom_lists import CUSTOM_NODES\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/custom_lists.py\", line 7, in <module>\r\nlangflow-1 | from langflow.interface.agents.custom import CUSTOM_AGENTS\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/agents/__init__.py\", line 1, in <module>\r\nlangflow-1 | from langflow.interface.agents.base import AgentCreator\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/agents/base.py\", line 5, in <module>\r\nlangflow-1 | from langflow.custom.customs import get_custom_nodes\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/custom/customs.py\", line 1, in <module>\r\nlangflow-1 | from langflow.template import frontend_node\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/template/frontend_node/__init__.py\", line 1, in <module>\r\nlangflow-1 | from langflow.template.frontend_node import (\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/template/frontend_node/memories.py\", line 7, in <module>\r\nlangflow-1 | from langchain.memory.chat_message_histories.postgres import DEFAULT_CONNECTION_STRING\r\nlangflow-1 | ImportError: cannot import name 'DEFAULT_CONNECTION_STRING' from 'langchain.memory.chat_message_histories.postgres' (/home/user/.local/lib/python3.10/site-packages/langchain/memory/chat_message_histories/postgres.py)\r\nlangflow-1 exited with code 1\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7d400903644230a8842ce189ca904ea9f8048b07', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} -{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "12a46b6936e23829d9956d4d5f1fa51faff76137", "is_iss": 0, "iss_html_url": "https://github.com/langflow-ai/langflow/issues/965", "iss_label": "stale", "title": "Method for Dynamically Manipulating Parameters of a Custom Component", "body": "```python\r\nclass DynamicConfigCustomComponent(CustomComponent):\r\n def build_config(self, prev_selection=None):\r\n config = {\r\n \"param1\": {\"display_name\": \"Parameter 1\"},\r\n \"param2\": {\r\n \"display_name\": \"Parameter 2\",\r\n \"options\": [1, 2, 
3],\r\n \"value\": 1,\r\n },\r\n }\r\n \r\n if prev_selection is not None:\r\n if prev_selection == 2:\r\n config[\"param3\"] = {\"display_name\": \"Parameter 3\", \"value\": \"hello\"}\r\n \r\n return config\r\n\r\n``` \r\nI want to dynamically change different values depending on the type of component that is input or connected when using a custom component, as shown in the attached code. For example, in Langflow's prompt template, when you change the text, the key value input into that component is dynamically displayed in the list. Is there any way to do this?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '12a46b6936e23829d9956d4d5f1fa51faff76137', 'files': [{'path': 'src/frontend/src/types/components/index.ts', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/types/components/index.ts"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "ad7cefa10c0647feee85114d58559fcf83ba6743", "is_iss": 0, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/1902", "iss_label": "setup", "title": "Error with 'python -m autogpt' command. Please set your OpenAI API key in .env or as an environment variable. You can get your key from https://beta.openai.com/account/api-keys", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nInstalled the 'stable' version of the program\r\nI run 'python -m autogpt' command and comes up with an error.\r\n\r\n\r\n![Screenshot 2023-04-16 183147](https://user-images.githubusercontent.com/130889399/232320050-2b495403-55e9-4d43-b588-e53172eba533.jpg)\r\n\r\nI have paid Chat GPT and Open AI API accounts.\r\nFor Chat GPT I have access to version 4\r\nFor Open AI API I do not have access to version 4, I am on the version before this.\n\n### Current behavior \ud83d\ude2f\n\nError message ;Please set your OpenAI API key in .env or as an environment variable.\r\nYou can get your key from https://beta.openai.com/account/api-keys'\n\n### Expected behavior \ud83e\udd14\n\nShould load the program as to start commands\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\npython -m autogpt```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ad7cefa10c0647feee85114d58559fcf83ba6743', 'files': [{'path': 'run.sh', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1\n0", "info_type": "Other\n\u73af\u5883\u53d8\u91cf /script shell\u7b49"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["run.sh"]}} -{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "90e6a55e378bc80352f01eb08122300b4d1a64ec", "is_iss": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/2428", "iss_label": "function: logging", "title": "Add logging of user input of the role and goals", "body": "### Duplicates\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Summary \ud83d\udca1\r\n\r\nNow logs reflect only gpt's response but i don't really remember what exactly i input before. Please log it same as in the console. 
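A minimal sketch of what mirroring those prompts into the log could look like, using only the standard logging module (the logger name, file name, and helper are hypothetical, not Auto-GPT's actual logging setup):

```python
import logging

logging.basicConfig(filename="activity.log", level=logging.INFO,
                    format="%(asctime)s %(message)s")
logger = logging.getLogger("user_input")

def logged_input(prompt: str) -> str:
    # Echo both the prompt and the user's reply into the log file,
    # so a session can be reconstructed later.
    reply = input(prompt)
    logger.info("%s%s", prompt, reply)
    return reply

ai_name = logged_input("AI Name: ")
```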
\r\nCurrent logging makes it a lot harder to debug\r\n\r\n### Examples \ud83c\udf08\r\n```\r\nAll packages are installed.\r\nWelcome back! Would you like me to return to being sc3?\r\nContinue with the last settings?\r\nName: sc3\r\nRole: warhammer 40k writer\r\nGoals: ['research the theme', 'do a 5000 symbols structurized explanation on wh40k lore', 'terminate']\r\nContinue (y/n): n\r\nWelcome to Auto-GPT! run with '--help' for more information.\r\nCreate an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.\r\nName your AI: For example, 'Entrepreneur-GPT'\r\nAI Name: da23eads\r\nda23eads here! I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\nda23eads is: wh 40k writer\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\nEnter nothing to load defaults, enter nothing when finished.\r\nGoal 1: research the theme\r\nGoal 2: do a plot esplanation on warhammer 40k universe\r\nGoal 3: terminate\r\nGoal 4:\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n- Thinking...\r\n```\r\n\r\n### Motivation \ud83d\udd26\r\n\r\nmake the world better", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ai_settings.yml"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["ai_settings.yml"], "asset": []}} -{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "16b7e7a91e7b6c73ddf3e7193cea53f1b45671fa", "is_iss": 1, "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/4218", "iss_label": "setup", "title": "AutoGPT v0.3.1 crashes immediately after task given", "body": "### Which Operating System are you using?\r\n\r\nWindows\r\n\r\n### Which version of Auto-GPT are you using?\r\n\r\nLatest Release v0.3.1\r\n\r\n### GPT-3 or GPT-4?\r\n\r\nGPT-3.5\r\n\r\n### Steps to reproduce \ud83d\udd79\r\n\r\nWelcome to Auto-GPT! run with '--help' for more information.\r\nCreate an AI-Assistant: input '--manual' to enter manual mode.\r\n Asking user via keyboard...\r\nI want Auto-GPT to: Search Big Mac prices in EU countries\r\nUnable to automatically generate AI Config based on user desire. Falling back to manual mode.\r\nCreate an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.\r\nName your AI: For example, 'Entrepreneur-GPT'\r\n Asking user via keyboard...\r\nAI Name: MacGPT\r\nMacGPT here! 
I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\n Asking user via keyboard...\r\nMacGPT is: Search for Big Mc prices in EU countries\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\n Enter nothing to load defaults, enter nothing when finished.\r\n Asking user via keyboard...\r\nGoal 1: Conduct a thorough and accurate search of BigMc prices across EU countries\r\n Asking user via keyboard...\r\nGoal 2: Provide price per each EU capital\r\n Asking user via keyboard...\r\nGoal 3: Ensure that the information provided is up-to-date and accurate\r\n Asking user via keyboard...\r\nGoal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n Asking user via keyboard...\r\nGoal 5: Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nEnter your budget for API calls: For example: $1.50\r\n Enter nothing to let the AI run without monetary limit\r\n Asking user via keyboard...\r\nBudget: $1\r\nMacGPT has been created with the following details:\r\nName: MacGPT\r\nRole: Search for Big Mc prices in EU countries\r\nGoals:\r\n- Conduct a thorough and accurate search of BigMc prices across EU countries\r\n- Provide price per each EU capital\r\n- Ensure that the information provided is up-to-date and accurate\r\n- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n- Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\makkolev\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\agpt\\autogpt\\__main__.py\", line 5, in <module>\r\n autogpt.cli.main()\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1635, in invoke\r\n rv = super().invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"C:\\agpt\\autogpt\\cli.py\", line 90, in main\r\n run_auto_gpt(\r\n File \"C:\\agpt\\autogpt\\main.py\", line 186, in run_auto_gpt\r\n agent.start_interaction_loop()\r\n File \"C:\\agpt\\autogpt\\agent\\agent.py\", line 113, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n File \"C:\\agpt\\autogpt\\llm\\chat.py\", line 244, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\llm_utils.py\", line 166, in create_chat_completion\r\n response = 
api_manager.create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\api_manager.py\", line 55, in create_chat_completion\r\n response = openai.ChatCompletion.create\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\chat_completion.py\", line 25, in create\r\n return super().create(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\abstract\\engine_api_resource.py\", line 153, in create\r\n response, _, api_key = requestor.request(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 226, in request\r\n resp, got_stream = self._interpret_response(result, stream)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 619, in _interpret_response\r\n self._interpret_response_line(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 682, in _interpret_response_line\r\n raise self.handle_error_response(\r\nopenai.error.AuthenticationError: <empty message>\r\n\r\n### Current behavior \ud83d\ude2f\r\n\r\nCrashes multiple times. Open_API_key has been provided. Restarted virtual environment a couple of times.\r\nNB! Tried to start AutoGPT both with Windows Python3.10 way and via Docker. In both cases can't start start search and receive immediately error (below) - openai.error.AuthenticationError: <empty message>\r\n\r\n### Expected behavior \ud83e\udd14\r\n\r\nStarts correctly\r\n\r\n### Your prompt \ud83d\udcdd\r\n\r\n```AI Name: MacGPT\r\nMacGPT here! I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\n Asking user via keyboard...\r\nMacGPT is: Search for Big Mc prices in EU countries\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\n Enter nothing to load defaults, enter nothing when finished.\r\n Asking user via keyboard...\r\nGoal 1: Conduct a thorough and accurate search of BigMc prices across EU countries\r\n Asking user via keyboard...\r\nGoal 2: Provide price per each EU capital\r\n Asking user via keyboard...\r\nGoal 3: Ensure that the information provided is up-to-date and accurate\r\n Asking user via keyboard...\r\nGoal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n Asking user via keyboard...\r\nGoal 5: Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nEnter your budget for API calls: For example: $1.50\r\n Enter nothing to let the AI run without monetary limit\r\n Asking user via keyboard...\r\nBudget: $1\r\nMacGPT has been created with the following details:\r\nName: MacGPT\r\nRole: Search for Big Mc prices in EU countries\r\nGoals:\r\n- Conduct a thorough and accurate search of BigMc prices across EU countries\r\n- Provide price per each EU capital\r\n- Ensure that the information provided is up-to-date and accurate\r\n- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n- Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n```\r\n\r\n\r\n### Your Logs \ud83d\udcd2\r\n\r\n```log\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n 
return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\makkolev\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\agpt\\autogpt\\__main__.py\", line 5, in <module>\r\n autogpt.cli.main()\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1635, in invoke\r\n rv = super().invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"C:\\agpt\\autogpt\\cli.py\", line 90, in main\r\n run_auto_gpt(\r\n File \"C:\\agpt\\autogpt\\main.py\", line 186, in run_auto_gpt\r\n agent.start_interaction_loop()\r\n File \"C:\\agpt\\autogpt\\agent\\agent.py\", line 113, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n File \"C:\\agpt\\autogpt\\llm\\chat.py\", line 244, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\llm_utils.py\", line 166, in create_chat_completion\r\n response = api_manager.create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\api_manager.py\", line 55, in create_chat_completion\r\n response = openai.ChatCompletion.create(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\chat_completion.py\", line 25, in create\r\n return super().create(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\abstract\\engine_api_resource.py\", line 153, in create\r\n response, _, api_key = requestor.request(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 226, in request\r\n resp, got_stream = self._interpret_response(result, stream)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 619, in _interpret_response\r\n self._interpret_response_line(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 682, in _interpret_response_line\r\n raise self.handle_error_response(\r\nopenai.error.AuthenticationError: <empty message>\r\nPress any key to continue . . 
.\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "is_iss": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5424", "iss_label": "question\nanswered\nquestion-migrate", "title": "How to identify query params with keys only and no value", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@router.get(\"/events\")\r\ndef get_alerts(request: Request):\r\n params = request.query_params\n```\n\n\n### Description\n\nI want to handle a use case where, if a query param is passed but no value is set, I would return a specific message (see the sketch below).
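A minimal sketch of one way to tell the two cases apart, assuming a hypothetical `alert` parameter; it relies on Starlette keeping blank values when parsing the query string, so `?alert` (or `?alert=`) arrives as the key `alert` mapped to an empty string:

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/events")
async def get_alerts(request: Request):
    params = request.query_params
    if "alert" not in params:
        # key absent entirely, e.g. /events?other=1
        return {"message": "'alert' was not passed at all"}
    if params["alert"] == "":
        # key present with no value, e.g. /events?alert
        return {"message": "'alert' was passed without a value"}
    return {"alert": params["alert"]}
```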
I want a different behavior compared to when it is not passed at all.\r\n\r\nI tried using request.query_params but it doesn't include the key in the request either.\r\n\r\nPostman request looks like this:\r\n<img width=\"805\" alt=\"image\" src=\"https://user-images.githubusercontent.com/104721284/192010955-160c2418-63f3-46ac-9f64-a416b92c03ae.png\">\r\n\r\n\r\n\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.70.0\n\n### Python Version\n\n3.9\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5425", "iss_label": "question\nanswered\nquestion-migrate", "title": "Error while opening swagger docs while uploading file in APIRouter", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nrouter = APIRouter(\r\n prefix='/predict',\r\n tags=[\"Prediction\"],\r\n responses={404: {\"description\": \"Not Found\"}}\r\n)\r\n\r\n\r\n@router.post(\"/\")\r\nasync def predict(file: UploadFile = File(...)):\r\n extension = file.filename.split(\".\")[-1] in (\"jpg\", \"jpeg\", \"png\")\r\n if not extension:\r\n raise HTTPException(status_code=400, detail=\"File Format Error : Uploaded file must be a JPG, JPEG or PNG file\")\r\n image = read_image_file(await file.read())\r\n result = predict_pneumonia(image)\r\n if result > 0.6:\r\n return JSONResponse(content={\"prediction\": \"pneumonia\"})\r\n return JSONResponse(content={\"prediction\": \"no pneumonia\"})\r\n```\r\n\r\n\r\n### Description\r\n\r\nI am just trying to create an ML prediction application using FastAPI. While uploading images, the Swagger docs don't load and it's showing the error mentioned below.
But the endpoint works perfectly when tried with Postman.\r\n\r\n![image](https://user-images.githubusercontent.com/58306412/192039571-1eed5f98-cd67-49ec-97ec-364b28ace0f9.png)\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 404, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\uvicorn\\middleware\\proxy_headers.py\", line 78, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 270, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\applications.py\", line 124, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 184, in __call__\r\n raise exc\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 75, in __call__\r\n raise exc\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 64, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 21, in __call__\r\n raise e\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 18, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 680, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 275, in handle\r\n await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 65, in app\r\n response = await func(request)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 225, in openapi\r\n return JSONResponse(self.openapi())\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 200, in openapi\r\n self.openapi_schema = get_openapi(\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\openapi\\utils.py\", line 423, in get_openapi\r\n definitions = get_model_definitions(\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\utils.py\", line 39, in get_model_definitions\r\n model_name = model_name_map[model]\r\nKeyError: <class 
'pydantic.main.Body_predict_predict__post'>\r\n```\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.85.0\r\n\r\n### Python Version\r\n\r\n3.9\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c6aa28bea2f751a91078bd8d845133ff83f352bf', 'files': [{'path': 'fastapi/routing.py', 'Loc': {\"('APIRouter', 'add_api_route', 513)\": {'mod': [593]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "is_iss": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/5422", "iss_label": "question\nquestion-migrate", "title": "Unidirectional websocket connections where only the server pushes data to the clients", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@app.websocket(\"/ws\")\r\nasync def websocket_endpoint(websocket: WebSocket):\r\n await websocket.accept()\r\n while True:\r\n data = await websocket.receive_text()\r\n await websocket.send_text(f\"Message text was: {data}\")\n```\n\n\n### Description\n\nHello,\r\nIs there a way I could send data to clients over websocket without listening for when clients send data back? I'm trying to have a websocket endpoint where the server is pushing data to the client in a unidirectional way without the option for the client to send responses back. There doesn't seem to be any code that I could find that supports this since all the documentation seems to require that the server is listening for a `websocket.receive_text()`.
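A sketch of a push-only endpoint under that assumption; it never calls `receive_text()`. One caveat: without a concurrent receive, a client disconnect may only surface as an error on the next send.

```python
import asyncio

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def push_only(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            # push on the server's own schedule; the client never writes
            await websocket.send_text("server update")
            await asyncio.sleep(1.0)
    except WebSocketDisconnect:
        # surfaced once the client goes away (possibly only on a later send)
        pass
```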
Any help would be much appreciated, thanks.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.81.0\n\n### Python Version\n\n3.8.13\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [23], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "55afb70b3717969565499f5dcaef54b1f0acc7da", "is_iss": 1, "iss_html_url": "https://github.com/fastapi/fastapi/issues/891", "iss_label": "question\nanswered\nquestion-migrate", "title": "SQL related tables and corresponding nested pydantic models in async", "body": "Really impressed with FastAPI so far... I have searched the docs, GitHub tickets, and Google for the issue described below.\r\n\r\n### Description\r\n\r\nHow best to work with related tables and corresponding nested pydantic models whilst persisting data in a relational database in an async application?\r\n\r\n### Additional context\r\n\r\nI have been attempting to extend the example in the docs \r\nhttps://fastapi.tiangolo.com/advanced/async-sql-databases/\r\nwhich relies on https://github.com/encode/databases\r\n\r\nUsing three test pydantic models as an example:\r\n\r\n```\r\nclass UserModel(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: int = Field(...)\r\n\r\nclass FavouriteBook(BaseModel):\r\n id: int\r\n title: str = Field(...)\r\n author: str = Field(...)\r\n\r\n\r\nclass ExtendedUser(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: FavouriteBook\r\n\r\n```\r\n\r\nthe route would ideally be along the lines of...\r\n\r\n```\r\n@router.get(\"/extended\", response_model=List[ExtendedUser])\r\nasync def list():\r\n query = **sqlAlchemy/databases call that works**\r\n return await database.fetch_all(query=query)\r\n\r\n```\r\n\r\n\r\nHow can a user create a route that returns the nested ExtendedUser from the database without resorting to performing two queries? \r\nAn SQL join is a standard way to do this with a single query. However, this does not work with SQLAlchemy core as the two tables contain 'id' and 'title' columns. \r\nIt is possible to work with the SQLAlchemy ORM - but not in an async way as far as I know. (async is my reason for using FastAPI).
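One way to keep the single join query without touching the schema is to alias the colliding columns in the select. A sketch, assuming `users` and `books` are the SQLAlchemy core `Table` objects backing these models (hypothetical names), `database` is the encode/databases instance, and SQLAlchemy 1.4+ `select()` syntax:

```python
import sqlalchemy

# `users`, `books`, `router`, and `database` are assumed to exist already,
# as in the async-sql-databases example from the docs.
query = (
    sqlalchemy.select(
        users,
        books.c.id.label("book_id"),
        books.c.title.label("book_title"),
        books.c.author.label("book_author"),
    )
    .select_from(users.join(books, users.c.favourite_book == books.c.id))
)

@router.get("/extended")
async def list_extended():
    rows = await database.fetch_all(query=query)
    # each row now has unambiguous keys; nest the book fields by hand
    return [
        {
            **{k: row[k] for k in row.keys() if not k.startswith("book_")},
            "favourite_book": {
                "id": row["book_id"],
                "title": row["book_title"],
                "author": row["book_author"],
            },
        }
        for row in rows
    ]
```

The unpacked dict still carries the raw `favourite_book` foreign key, which is why the nested `favourite_book` object is assigned last and overwrites it.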
I could rename the columns to something unique (but renaming the 'id' column seems like poor database design to me).\r\n\r\n\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [31], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "1760da0efa55585c19835d81afa8ca386036c325", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/3882", "iss_label": "question\nquestion-migrate", "title": "Doing work after the HTTP response has been sent", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nfrom fastapi import FastAPI, Request\r\n\r\napp = FastAPI()\r\n\r\n@app.middleware(\"http\")\r\nasync def write_log(request: Request, call_next):\r\n response = await call_next(request)\r\n # write log\r\n return response\n```\n\n\n### Description\n\nI want to log data for each request; however, since my application is latency-sensitive, I want to return as quickly as possible. Is there an equivalent to Symfony's \"[terminate](https://symfony.com/doc/current/reference/events.html#kernel-terminate)\" event (which I guess is the `request_finished` signal in Django)?
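FastAPI's closest built-in analogue is background tasks, which run only after the response has gone out. A minimal sketch (the `write_log` helper and log path are made up for illustration):

```python
from fastapi import BackgroundTasks, FastAPI

app = FastAPI()

def write_log(message: str) -> None:
    # executed after the response has already been sent to the client
    with open("log.txt", "a") as log:
        log.write(message + "\n")

@app.get("/items")
async def read_items(background_tasks: BackgroundTasks):
    background_tasks.add_task(write_log, "handled /items")
    return {"status": "ok"}
```

At the middleware level, a similar effect can be had by attaching a Starlette `BackgroundTask` to the response object before returning it.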
The idea is to do the log writing after the HTTP response has been sent.\r\n\r\nThe above code is from the middleware documentation, but it basically means the code for writing the log will be executed before the response is sent.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.65.1\n\n### Python Version\n\n3.8.5\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1760da0efa55585c19835d81afa8ca386036c325', 'files': [{'path': 'fastapi/background.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/background.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/1498", "iss_label": "question\nreviewed\nquestion-migrate", "title": "RedirectResponse from a POST request route to GET request route shows 405 Error code.", "body": "_Summary of the total issue is:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**\r\n\r\n_This is not necessarily a bug, rather a question._\r\n### Things I tried:\r\nI want to redirect the response from the 2nd route to the 1st route. This [Issue#199](https://github.com/tiangolo/fastapi/issues/199) here explains **GET to GET** but not a **POST to GET**. **N.B:** `I have done this type of POST -> GET redirecting in Flask, it was working there but not here.` And also this [Issue#863](https://github.com/tiangolo/fastapi/issues/863) has the same problem but doesn't really solve it.
To reproduce the error, check the bottom.\r\n\r\n```Python3\r\n#1st route (GET request)\r\n@admin_content_edit_router.get('/admin/edit_content/set_category')\r\nasync def set_category(request:Request):\r\n return templates.TemplateResponse(\"admin/category_edit.html\", {'request': request})\r\n\r\n#2nd route (POST request)\r\n@admin_content_edit_router.post('/admin/edit_content/add_category')\r\nasync def add_category(request:Request):\r\n # here forms are getting processed\r\n return RedirectResponse(app.url_path_for('set_category')) # from here to 1st route\r\n```\r\nBut it shows :\r\n```Python3\r\n {\"detail\":\"Method Not Allowed\"}\r\n```\r\nFull traceback:\r\n```Python3\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/add_category HTTP/1.1\" 307 Temporary Redirect\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/set_category HTTP/1.1\" 405 Method Not Allowed\r\nERROR: Exception in callback _SelectorSocketTransport._read_ready()\r\nhandle: <Handle _SelectorSocketTransport._read_ready()>\r\nTraceback (most recent call last):\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\events.py\", line 145, in _run\r\n self._callback(*self._args)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 730, in _read_ready\r\n self._protocol.data_received(data)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 162, in data_received\r\n self.handle_events()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 247, in handle_events\r\n self.transport.resume_reading()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 711, in resume_reading\r\n raise RuntimeError('Not paused')\r\nRuntimeError: Not paused\r\n```\r\n\r\nBut when I do a GET-to-GET redirect it works without any issue, while a POST-to-GET blows things up! Am I completely missing something here? I did look at the Starlette docs on reverse route lookups (linked below), but nothing helps.
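For context: the redirect above uses the default status code 307, which tells the client to repeat the same method (POST) against the GET-only route, hence the 405. The usual PRG fix is a 303 See Other, which makes the client follow up with GET. A sketch (route names are illustrative):

```python
from fastapi import FastAPI, status
from fastapi.responses import RedirectResponse

app = FastAPI()

@app.post("/login")
async def login():
    # 303 makes the client re-issue the request as GET (Post/Redirect/Get)
    return RedirectResponse(url="/resource/1", status_code=status.HTTP_303_SEE_OTHER)

@app.get("/resource/{r_id}")
async def get_resource(r_id: str):
    return {"r_id": r_id}
```

The `RuntimeError: Not paused` at the end of the traceback looks like a separate uvicorn transport quirk rather than part of the redirect problem.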
https://www.starlette.io/routing/#reverse-url-lookups\r\n\r\nQuick reproduction of the error:\r\n```Python3\r\n\r\nfrom fastapi import FastAPI\r\nfrom starlette.responses import RedirectResponse\r\nimport os\r\nfrom starlette.status import HTTP_302_FOUND,HTTP_303_SEE_OTHER\r\n\r\napp = FastAPI()\r\n\r\n@app.post(\"/\")\r\nasync def login():\r\n # HTTP_302_FOUND, HTTP_303_SEE_OTHER: neither is working :(\r\n return RedirectResponse(url=\"/ressource/1\",status_code=HTTP_303_SEE_OTHER)\r\n\r\n@app.get(\"/ressource/{r_id}\")\r\nasync def get_ressource(r_id:str):\r\n return {\"r_id\": r_id}\r\n\r\nif __name__ == '__main__':\r\n os.system(\"uvicorn tes:app --host 0.0.0.0 --port 80\")\r\n```\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a0e4d38bea74940de013e04a6d6f399d62f04280', 'files': [{'Loc': [58], 'path': None}]}", "own_code_loc": [{"Loc": [58], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "b93f8a709ab3923d1268dbc845f41985c0302b33", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/4551", "iss_label": "question\nquestion-migrate", "title": "Attribute not found while testing a Beanie Model inside fast api", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nMy Code:\r\n\r\n\r\nMy Route:\r\n\r\n@router.post(\"/login\")\r\nasync def internalLogin(request: Request,\r\n email: str = Form(...),\r\n password: str = Form(...)):\r\n try:\r\n res, token = await Controller.internalLogin(email=email, password=password)\r\n if res:\r\n return {\"message\": \"Success\"}\r\n else:\r\n return {\"message\": \"Failure\"}\r\n except DocumentNotFound as documentNotFoundException:\r\n return {\"message\": \"Error\"}\r\n```\r\n\r\nController:\r\n```\r\n@staticmethod\r\n async def internalLogin(email: str, password: str) -> List[bool | str]:\r\n logger.info(message=\"Inside OpenApi Controller\", fileName=__name__, functionName=\"OpenApiController\")\r\n try:\r\n user = await internalUserDb(email=email)\r\n if user is not None and user.verifyPassword(password):\r\n print(\"Logged In\")\r\n return [True, \"\"]\r\n else:\r\n print(\"Failed\")\r\n return [False, \"\"]\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n\r\n```\r\n\r\nDB:\r\n\r\n```\r\nasync def internalUserDb(email: str) -> InternalUserModel:\r\n try:\r\n user: InternalUserModel = await
InternalUserModel.find_one(InternalUserModel.email == email)\r\n return user\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n```\r\n\r\nMy TestCode:\r\n\r\n```\r\n@pytest.mark.anyio\r\nasync def testLogin():\r\n response = await asyncClient.post(\"/internalLogin\",\r\n data={\"email\": \"sample@mail.com\", \"password\": \"samplePass\"})\r\n assert response.status_code == 303\r\n```\r\n\r\nMy error while testing is: \r\n\r\n```\r\nFAILED Tests/TestLogin.py::testLogin[asyncio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\nFAILED Tests/TestLogin.py::testLogin[trio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\n```\r\n\r\n\r\n### Description\r\n\r\nHello, I am new to FastAPI. I am trying to test the FastAPI app with pytest. Normal tests are working perfectly fine, but I am using MongoDB as the backend to store my data. When I try to test a route that fetches data from the database, it shows an error like `attribute not inside the model`. I am using Beanie ODM for MongoDB.\r\n\r\n### Operating System\r\n\r\nmacOS\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.73\r\n\r\n### Python Version\r\n\r\n3.10\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b93f8a709ab3923d1268dbc845f41985c0302b33', 'files': [{'path': 'docs/en/docs/advanced/testing-events.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/en/docs/advanced/testing-events.md"], "test": [], "config": [], "asset": []}} -{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "78b07cb809e97f400e196ff3d89862b9d5bd5dc2", "is_iss": 0, "iss_html_url": "https://github.com/fastapi/fastapi/issues/4587", "iss_label": "question\nquestion-migrate", "title": "Use the raw response in Response classes", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nclass CustomEncoder():\r\n def encode(self, dict_data):\r\n return dict_data\r\n\r\nclass PhotonJSONResponse(JSONResponse):\r\n def __init__(self, content: typing.Any = None, status_code: int = 200, headers: dict = None, media_type: str = None,\r\n background: BackgroundTask = None) -> None:\r\n # Fetch the untouched response in the upper stacks\r\n current_frame = inspect.currentframe()\r\n self.raw_response = None\r\n while current_frame.f_back:\r\n if 'raw_response' in current_frame.f_locals:\r\n self.raw_response =
current_frame.f_locals['raw_response']\r\n break\r\n current_frame = current_frame.f_back\r\n \r\n self._encoder = CustomEncoder()\r\n super().__init__(content, status_code, headers, media_type, background)\r\n\r\n def render(self, content: Any) -> bytes:\r\n dict_data = self._encoder.encode(self.raw_response)\r\n return super().render(dict_data)\r\n```\r\n\r\n\r\n### Description\r\n\r\nI want to access the raw response that hasn't been through the json_encoder inside my response class. This is because I have custom types that are handled in a custom encoder. I have looked through the relevant FastAPI code and I can't find a way to override the encoder for all requests either. As you can see in the example code, I currently use reflection to fetch the raw_response in the upper stack frame; however, this is not very reliable. I also can't seem to do this using an APIRoute implementation because it would require re-implementing the route handler, which is messy, though maybe that is where this would belong.\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.63.0\r\n\r\n### Python Version\r\n\r\n3.8.12\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '78b07cb809e97f400e196ff3d89862b9d5bd5dc2', 'files': [{'path': 'fastapi/routing.py', 'Loc': {\"('APIRoute', None, 300)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3341", "iss_label": "bug", "title": "state isn't clearly understood how to incorporate for script.py", "body": "### Describe the bug\n\nI see that output_modifier and a few other functions require a state object, which is not defined in script.py, nor do any of the existing plugins (that I looked at) use a state object.\r\n\r\nAs a result, I am unable to use the functions.
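For reference, the webui itself supplies `state` (the dict of UI and generation settings) when it invokes each hook, so an extension accepts it rather than constructing or forwarding it manually. A sketch, assuming the two-argument hook signatures that the TypeError in the logs below points to (the extension name is made up):

```python
# extensions/myextension/script.py (hypothetical extension)

def input_modifier(string, state):
    # `state` is passed in by the webui; accept it even if unused
    if string.startswith("/do "):
        string += " Tell me a story."
    return string.replace("/do ", "")

def output_modifier(string, state):
    # also called by the webui, on the model's reply; an extension
    # normally does not call this itself from input_modifier
    return string.split("###")[0].split("Human:")[0]
```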
I get a message about needing to pass state.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nTry to use this snippet:\r\n\r\nhttps://github.com/ChobPT/oobaboogas-webui-langchain_agent/blob/main/script.py#L185-L190\r\n\r\n```\r\ndef input_modifier(string):\r\n if string[:3] == \"/do Story\":\r\n print('hi')\r\n string += ' Tell me a story.'\r\n else:\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\n return string.replace('/do ', '')\r\n\r\n```\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nFile \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state_dict)\r\nNameError: name 'state_dict' is not defined\r\n\r\n```\r\n```\r\n File \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state)\r\nNameError: name 'state' is not defined\r\n\r\n```\r\n\r\n```\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\nTypeError: output_modifier() missing 1 required positional argument: 'state'\r\n\r\n```\r\n\r\nAnd if I remove state from output_modifier (as you see in my snippet above with the print), I get no modified output and no print statement at the console:\r\nOutput generated in 1.99 seconds (9.06 tokens/s, 18 tokens, context 66, seed 123523724)\r\nTraceback (most recent call last):\r\n File \"/home/user/oobabooga_linux/text-generation-webui/server.py\", line 1181, in <module>\r\n time.sleep(0.5)\n```\n\n\n### System Info\n\n```shell\npython 3.9 oracle linux 8.5\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92', 'files': []}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ChobPT", "pro": "oobaboogas-webui-langchain_agen", "path": ["script.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["script.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "8962bb173e9bdc36eb9cf28fe9e1952b2976e781", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5337", "iss_label": "bug", "title": "Generation slows at max context, even when truncated", "body": "### Describe the bug\r\n\r\n### Issue Summary\r\nWhen generating, if the context is near the maximum set via n_ctx (and the truncate value in Parameters is set to match it), generation will be quite slow. This does not occur if the context is more than approximately 300-500 below the set value. It still occurs even if the n_ctx and truncation numbers are reduced (though the slowdown becomes less severe).\r\n\r\n### Observations\r\n\r\n- Since speed is perfectly fine up until we near the context limit, then immediately drops, I suspect this has something to do with how the context is truncated; the actual act of truncating the input seems to cause the slowdown, despite the fact that this should be a simple operation.\r\n- Increasing the limit back up after lowering also does not help; this makes sense, since it just pulls in as much of the conversation as will fit and hits the context limit again, requiring truncation.
\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n- Set your n_ctx to a given value. (In my case, 8192).\r\n- Chat with the model, noting the speed. At this point, it should be fairly rapid. (In my case, 4.72 tokens/s up to context 7792).\r\n- As soon as the context reaches approximately 7800, generation slows. (In my case, 0.87 tokens/s on the message immediately after the above, at context 7798).\r\n- At this point, reducing n_ctx and reloading the model only partially helps. (In my case, reducing to 4092 produced 2.51 tokens/s at context 3641.)\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nN/A\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\n- Model: TheBloke/Silicon-Maid-7B-GGUF, using the 5_K_M quant.\r\n- Branch: dev\r\n- Commit: 8962bb173e9bdc36eb9cf28fe9e1952b2976e781\r\n- OS: Windows 11\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8962bb173e9bdc36eb9cf28fe9e1952b2976e781', 'files': [{'path': 'modules/ui_model_menu.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui_model_menu.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "564a8c507fffc8b25a056d8930035c63da71fc7b", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3042", "iss_label": "bug", "title": "ERROR:Task exception was never retrieved", "body": "### Describe the bug\n\nRight after installation I open the webui in the browser and I receive an error.\n\n### Is there an existing issue for this?\n\n- [x] I have searched the existing issues\n\n### Reproduction\n\nRight after installation I open the webui in the browser and I receive this error.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\n2023-07-07 21:25:11 ERROR:Task exception was never retrieved\r\nfuture: <Task finished name='3s4vbrhqz8a_103' coro=<Queue.process_events() done, defined at D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py:343> exception=1 validation error for PredictBody\r\nevent_id\r\n Field required [type=missing, input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.1.2/v/missing>\r\nTraceback (most recent call last):\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 347, in process_events\r\n client_awake = await self.gather_event_data(event)\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 220, in gather_event_data\r\n data, client_awake = await self.get_message(event, timeout=receive_timeout)\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 456, in get_message\r\n return PredictBody(**data), True\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\pydantic\\main.py\", line 150, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody\r\nevent_id\r\n Field required
[type=missing, input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.1.2/v/missing\n```\n\n\n### System Info\n\n```shell\nWindows 11\r\nEVGA RTX3080\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '564a8c507fffc8b25a056d8930035c63da71fc7b', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "07510a24149cbd6fd33df0c4a440d60b9783a18e", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2171", "iss_label": "enhancement\nstale", "title": "support for fastest-inference-4bit branch of GPTQ-for-LLaMa", "body": "**Description**\r\n\r\nThere is a new branch of GPTQ-for-LLaMa - fastest-inference-4bit - that combines Triton and CUDA, and people say it's much faster. It would be nice if it was supported here. I tried to compile it myself but it doesn't work with this webui because there is no llama_inference_offload.py in the new branch. \r\n\r\n**Additional Context**\r\nhttps://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-inference-4bit\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '07510a24149cbd6fd33df0c4a440d60b9783a18e', 'files': [{'path': 'modules/GPTQ_loader.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/GPTQ_loader.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "7ddf6147accfb5b95e7dbbd7f1822cf976054a2a", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/446", "iss_label": "bug", "title": "Factual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047", "body": "### Describe the bug\n\nI get factual answers in ??
like this Factual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nCommon sense questions and answers\r\n\r\nQuestion: Hi\r\nFactual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047\n\n### Screenshot\n\n<img width=\"1535\" alt=\"Screenshot 2023-03-20 at 12 43 35 AM\" src=\"https://user-images.githubusercontent.com/25454015/226214371-e9424c75-6b81-4189-9865-70446b62235d.png\">\r\n\n\n### Logs\n\n```shell\nLoading LLaMA-7b...\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 33/33 [00:06<00:00, 5.47it/s]\r\nLoaded the model in 147.25 seconds.\r\nOutput generated in 12.96 seconds (4.71 tokens/s, 61 tokens)\r\nOutput generated in 13.20 seconds (0.61 tokens/s, 8 tokens)\n```\n\n\n### System Info\n\n```shell\nMacOS Ventura 13.2.1, Apple M1 Max\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7ddf6147accfb5b95e7dbbd7f1822cf976054a2a', 'files': [{'path': 'download-model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\n\u7ed3\u679c\u5947\u602a", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["download-model.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "3609ea69e4c4461a4f998bd12cc559d5a016f328", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5761", "iss_label": "bug", "title": "api broke: AttributeError: 'NoneType' object has no attribute 'replace'", "body": "### Describe the bug\n\napi calls result in\r\nAttributeError: 'NoneType' object has no attribute 'replace'\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\ninstall no requirements and llama-cpp-python by source then try to run curl\r\n\r\ncurl http://192.168.3.17:5000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'\r\n\r\nException in ASGI application\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 411, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 69, in __call__\r\n return await 
self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py\", line 1054, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py\", line 123, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 62, in __call__\r\n await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 758, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 778, in app\r\n await route.handle(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 299, in handle\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 79, in app\r\n await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 74, in app\r\n response = await func(request)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 278, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/data/text-generation-webui/extensions/openai/script.py\", line 137, in openai_chat_completions\r\n response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 536, in chat_completions\r\n return deque(generator, maxlen=1).pop()\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 315, in chat_completions_common\r\n prompt = generate_chat_prompt(user_input, generate_params)\r\n File \"/data/text-generation-webui/modules/chat.py\", line 97, in generate_chat_prompt\r\n user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),\r\n File 
\"/data/text-generation-webui/modules/chat.py\", line 636, in replace_character_names\r\n text = text.replace('{{user}}', name1).replace('{{char}}', name2)\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\ninstall no avx2 requirements and llama-cpp-python by source then try to run curl\r\n\r\ncurl http://192.168.3.17:5000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'\r\n\r\nException in ASGI application\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 411, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 69, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py\", line 1054, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py\", line 123, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 62, in __call__\r\n await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 758, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 778, in app\r\n await route.handle(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 299, in handle\r\n await self.app(scope, receive, send)\r\n File 
\"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 79, in app\r\n await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 74, in app\r\n response = await func(request)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 278, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/data/text-generation-webui/extensions/openai/script.py\", line 137, in openai_chat_completions\r\n response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 536, in chat_completions\r\n return deque(generator, maxlen=1).pop()\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 315, in chat_completions_common\r\n prompt = generate_chat_prompt(user_input, generate_params)\r\n File \"/data/text-generation-webui/modules/chat.py\", line 97, in generate_chat_prompt\r\n user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),\r\n File \"/data/text-generation-webui/modules/chat.py\", line 636, in replace_character_names\r\n text = text.replace('{{user}}', name1).replace('{{char}}', name2)\r\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\n\n### System Info\n\n```shell\noracle linux 8, rocky linux 9\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3609ea69e4c4461a4f998bd12cc559d5a016f328', 'files': [{'path': 'modules/chat.py', 'Loc': {\"(None, 'replace_character_names', 637)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/chat.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5774", "iss_label": "bug", "title": "The checksum verification for miniconda_installer.exe has failed.", "body": "### Describe the bug\n\nThe checksum verification for miniconda_installer.exe has failed.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAfter I extracted the files, I clicked start_windows.bat.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nDownloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to D:\\text-generation-webui\\installer_files\\miniconda_installer.exe\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 53.8M 100 53.8M 0 0 23.2M 0 0:00:02 0:00:02 --:--:-- 23.3M\r\nfind: '/i': No such file or directory\r\nfind: '/v': No 
such file or directory\r\nfind: ' ': No such file or directory\r\nfind: '/i': No such file or directory\r\nfind: '307194e1f12bbeb52b083634e89cc67db4f7980bd542254b43d3309eaf7cb358': No such file or directory\r\nThe checksum verification for miniconda_installer.exe has failed.\n```\n\n\n### System Info\n\n```shell\nwindows11,CPU:i711800H,GPU:NVDIA RTXA2000Laptop\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a', 'files': [{'path': 'start_windows.bat', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["start_windows.bat"]}} -{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c17624432726ab5743dfa21af807d559e4f4ff8c", "is_iss": 0, "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6209", "iss_label": "bug\nstale", "title": "Oobabooga login not working through reverse proxy", "body": "### Describe the bug\n\nI have the latest text-generation-webui (just ran the update script) running on my home computer running Windows 11. I am running it on a LAN IP (192.168.1.102) and reverse-proxying it with Nginx so I can access it remotely over the Internet.\r\n\r\nSome recent update to text-generation-webui appears to have broken the login code. When I'm logging in from the LAN, I see the normal login screen, and authentication works. When I'm logging in from the WAN, I get a bare-bones UI which refuses to accept my login creds. \r\n\r\nI have been running this setup for months without change, so my assumption is that it's a recent change in the text-generation-webui codebase that's behind it.\r\n\r\nMy CMD_FLAGS.txt is:\r\n\r\n--gradio-auth myusername:mypassword\r\n--auto-devices\r\n--listen\r\n--listen-host 192.168.1.102\r\n--listen-port 7860\r\n\r\n\r\n\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\n1. Start the webui on a WAN port.\r\n2. Reverse-proxy to a publically-accessible IP.\r\n3. Try to login.\n\n### Screenshot\n\n![Oobaboga_Login](https://github.com/oobabooga/text-generation-webui/assets/13558208/823b2df8-d4e8-43c1-ab93-beb72cf6cae7)\r\n\n\n### Logs\n\n```shell\nI see repeated errors in the console: \"WARNING: invalid HTTP request received\", but no Python trace info.\n```\n\n\n### System Info\n\n```shell\nWindows 11, NVidia Founder RTX 2060 Super.\r\n\r\nReverse proxy is NGinx running on Debian. 
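The `find: '/i': No such file or directory` errors in the log above suggest a Unix-style `find` sitting earlier in `PATH` than the Windows one the batch file expects, so the script's hash comparison never actually ran. Independently of the script, the download can be checked by hashing it directly; a minimal Python sketch, reusing the expected SHA-256 from the log (the installer path is copied from the report and is otherwise an assumption):

```python
import hashlib

# Expected digest copied from the log above; the path is taken from the report.
EXPECTED_SHA256 = "307194e1f12bbeb52b083634e89cc67db4f7980bd542254b43d3309eaf7cb358"
INSTALLER = r"D:\text-generation-webui\installer_files\miniconda_installer.exe"

def sha256_of(path, chunk_size=1 << 20):
    """Hash the file in chunks so a ~54 MB installer never sits fully in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    ok = sha256_of(INSTALLER) == EXPECTED_SHA256
    print("Checksum OK" if ok else "Checksum mismatch - re-download the installer")
```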
It uses Let's Encrypt so I can encrypt my remote connection.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c17624432726ab5743dfa21af807d559e4f4ff8c', 'files': [{'path': 'requirements/full/requirements.txt', 'Loc': {'(None, None, 7)': {'mod': [7]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements/full/requirements.txt"], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "69d863b44ab5c7dad6eea04b7e3563f491c714a4", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/376", "iss_label": "", "title": "Unable to select camera device through UI", "body": "It would be nice to have a way to select which camera to use. I am on Ubuntu 22.04 with a Linux laptop. Since I use an external camera and keep my laptop closed, the program is defaulting to the on-board camera.\r\n\r\nI was unable to find a quick/easy way to change the default camera in Ubuntu, so it would be nice if the program was able to allow a selection in the UI.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '69d863b44ab5c7dad6eea04b7e3563f491c714a4', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'webcam_preview', 252)\": {'mod': [259]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "080d6f5110d2e185e8ce4e10451ac96313079be2", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/315", "iss_label": "", "title": "How to select the correct camera?", "body": "How to select the correct camera ? 
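Both camera-selection requests above come down to enumerating capture devices, which OpenCV only supports by probing indices until opening fails. A minimal sketch of that workaround, assuming `opencv-python` is installed; this is illustrative, not the project's actual `modules/ui.py` code:

```python
import cv2  # assumes the opencv-python package is installed

def list_available_cameras(max_index=5):
    """Return the indices in [0, max_index) that open as capture devices."""
    available = []
    for index in range(max_index):
        cap = cv2.VideoCapture(index)
        if cap.isOpened():
            available.append(index)
        cap.release()  # safe to call even when the device failed to open
    return available

if __name__ == "__main__":
    print("Available camera indices:", list_available_cameras())
    # A UI dropdown could then feed the chosen index to cv2.VideoCapture(...)
```

Probing unused indices emits harmless OpenCV warnings, like the ones quoted in the experimental-branch report further down.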
\r\nIs there any method to improve the output resolution of the camera?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '080d6f5110d2e185e8ce4e10451ac96313079be2', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'webcam_preview', 252)\": {'mod': [259]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "5bc3ada6324a28a8d8556da1176b546f2d2140f8", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/922", "iss_label": "", "title": "ERROR: Cannot install -r requirements.txt (line 13), tensorflow and typing-extensions>=4.8.0 because these package versions have conflicting dependencies.", "body": "The conflict is caused by:\n The user requested typing-extensions>=4.8.0\n torch 2.5.1+cu121 depends on typing-extensions>=4.8.0\n tensorflow-intel 2.12.1 depends on typing-extensions<4.6.0 and >=3.6.6", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5bc3ada6324a28a8d8556da1176b546f2d2140f8', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 19)': {'mod': [19]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "6b0cc749574d7307b2f7deedfa2a0dbb363329da", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/243", "iss_label": "", "title": "[experimental] doesn't show the camera I want..", "body": "I'm using the `experimental` branch so I could choose the camera I wanted (OBS Virtual Camera) which is (2) but it only shows \"Camera 0\", so I made a test script and I was able to pull my OBS Virtual Camera using 'matplotlib',\r\n\r\n```\r\n(venv) (base) PS E:\\deep-live-cam> python list.py\r\n[ WARN:0@10.769] global cap_msmf.cpp:1769 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. 
Error: -2147483638\r\n[ WARN:0@10.839] global cap.cpp:304 cv::VideoCapture::open VIDEOIO(DSHOW): raised OpenCV exception:\r\n\r\nOpenCV(4.10.0) D:\\a\\opencv-python\\opencv-python\\opencv\\modules\\videoio\\src\\cap_dshow.cpp:2763: error: (-215:Assertion failed) pVih in function 'videoInput::start'\r\n\r\n\r\n[ERROR:0@10.846] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.478] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.563] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.635] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.711] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.787] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.862] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\nAvailable camera indices: [2]\r\nEnter the camera index you want to use: 2\r\nCamera 2 opened successfully. Press 'q' to quit.\r\nPress 'q' and Enter to quit, or just Enter to continue: q\r\n(venv) (base) PS E:\\deep-live-cam>\r\n```\r\n\r\nIt shows up like this:\r\n\r\n<img width=\"419\" alt=\"Screen Shot 2024-08-12 at 8 31 51 PM\" src=\"https://github.com/user-attachments/assets/3f16b4f6-6ac7-492f-88a5-6abdc58e29b0\">\r\n\r\nSo I know it's possible, is there a way to force 'deep-live-cam' to use \"Camera (2)\" ?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6b0cc749574d7307b2f7deedfa2a0dbb363329da', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'webcam_preview', 307)\": {'mod': [322]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "513e41395687921d589fc10bbaf2f72ed579c84a", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/915", "iss_label": "", "title": "Subject: Missing ui.py file in modules directory - preventing project execution", "body": "Hi,\n\nI'm trying to run the Deep-Live-Cam project, but I'm encountering a problem. The ui.py file is missing from the modules directory. I've tried the following:\n\n* Cloning the repository using git clone: `git clone https://github.com/hacksider/Deep-Live-Cam.git`\n* Cloning the repository using GitHub Desktop.\n* Downloading the repository as a ZIP file.\n\nIn all cases, the ui.py file is not present. I've also checked the repository on GitHub.com directly in my browser, and the file is missing there as well.\n\nThe modules directory contains the following files: [List the files you see].\n\nCould you please let me know how to obtain the ui.py file? 
Is it intentionally missing, or is there a separate download/generation step required?\n\nThanks for your help!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '513e41395687921d589fc10bbaf2f72ed579c84a', 'files': [{'path': 'modules/ui.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "a49d3fc6e5a228a6ac92e25831c507996fdc0042", "is_iss": 1, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/697", "iss_label": "", "title": "[Solved] inswapper_128_fp16.onnx failed:Protobuf parsing failed", "body": "I have this error on macOS Apple Silicon.\r\n`Exception in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"/Users//PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py\", line 554, in _clicked\r\n self._command()\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 242, in <lambda>\r\n command=lambda: webcam_preview(\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 649, in webcam_preview\r\n create_webcam_preview(\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 707, in create_webcam_preview\r\n temp_frame = frame_processor.process_frame(source_image, temp_frame)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 65, in process_frame\r\n temp_frame = swap_face(source_face, target_face, temp_frame)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 49, in swap_face\r\n return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 44, in get_face_swapper\r\n FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 96, in get_model\r\n model = router.get_model(providers=providers, provider_options=provider_options)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 40, in get_model\r\n session = PickableInferenceSession(self.onnx_file, **kwargs)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 25, in __init__\r\n super().__init__(model_path, **kwargs)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 347, in __init__\r\n self._create_inference_session(providers, provider_options, disabled_optimizers)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 384, in _create_inference_session\r\n sess = C.InferenceSession(session_options, self._model_path, True, 
self._read_config_from_model)\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/PycharmProjects/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.`\r\n\r\n\r\nThis https://github.com/hacksider/Deep-Live-Cam/issues/613 didn't help. \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "hacksider", "pro": "deep-live-cam", "path": ["inswapper_128_fp16.onnx"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2\n+\n0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["inswapper_128_fp16.onnx"]}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "d4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/94", "iss_label": "", "title": "Can't find onnxruntime-silicon==1.13.1", "body": "Hi,\r\n\r\nCurrently on MacOS (Silicon, M2 Max), it seems not possible to download (with pip at least) the 1.13.1 version of onnxruntime.\r\n\r\n`ERROR: Could not find a version that satisfies the requirement onnxruntime-silicon==1.13.1 (from versions: 1.14.1, 1.15.0, 1.16.0, 1.16.3)\r\nERROR: No matching distribution found for onnxruntime-silicon==1.13.1`\r\n\r\nAnd, if I'm right, Deep-Live-Cam doesn't support more recent versions of onnxruntime, right ? So if that's the case, what could be a workaround ?\r\n\r\nThanks !", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 16)': {'mod': [16]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "install require"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} -{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa", "is_iss": 0, "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/345", "iss_label": "", "title": "Program crashes when processing with DirectML", "body": "I am using an AMD RX 6600 XT GPU with the latest drivers and attempting to run the program with DirectML. The program's UI turns white and then crashes. It works fine with CPU execution but fails with DirectML.\r\nI already tried to reinstall onnxruntime-directml with no effect. 
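For the `INVALID_PROTOBUF` failure above, the model file itself is usually at fault: an interrupted download, or a Git LFS pointer file checked out in place of the real weights. A small pre-flight check, assuming the `onnxruntime` package and the model path from the traceback (a sketch, not the project's own code):

```python
from pathlib import Path

import onnxruntime as ort  # assumes the onnxruntime package is installed

MODEL = Path("models/inswapper_128_fp16.onnx")  # path taken from the traceback

with MODEL.open("rb") as f:
    head = f.read(64)
# Git LFS pointer files are tiny text files that begin with this marker.
if head.startswith(b"version https://git-lfs.github.com"):
    raise SystemExit("Git LFS pointer found instead of weights; re-download the model.")

# If the file is a valid ONNX protobuf, building a session succeeds.
session = ort.InferenceSession(str(MODEL), providers=["CPUExecutionProvider"])
print("Model parsed OK; inputs:", [i.name for i in session.get_inputs()])
```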
Terminal:\r\n\r\n (myenv) E:\\Edesktop\\deep-live\\Deep-Live-Cam>python run.py --execution-provider dml\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5\r\nset det-size: (640, 640)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100/100 [00:01<00:00, 50.67it/s]\r\n[DLC.CORE] Creating temp resources...\r\n[DLC.CORE] Extracting frames...\r\n[DLC.FACE-SWAPPER] Progressing...\r\nProcessing: 0%| | 0/125 [00:00<?, ?frame/s, execution_providers=['DmlExecutionProvider'], execution_threads=8, max_memory=16\r\n(myenv) E:\\Edesktop\\deep-live\\Deep-Live-Cam>\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'create_root', 93)\": {'mod': [139, 140, 141]}}, 'status': 'modified'}, {'path': 'modules/core.py', 'Loc': {\"(None, 'parse_args', 47)\": {'mod': [67, 71]}, '(None, None, None)': {'mod': [11]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py", "modules/core.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Textualize", "repo_name": "rich", "base_commit": "7e1928efee53da1ac7d156912df04aef83eefea5", "is_iss": 0, "iss_html_url": "https://github.com/Textualize/rich/issues/1247", "iss_label": "Needs triage", "title": "[REQUEST] Extra caching for `get_character_cell_size`", "body": "**How would you improve Rich?**\r\n\r\nAdd a small `lru_cache` to https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L28 , similar to cache one layer down for https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L46\r\n\r\nSize `4096` was 
plenty for what I describe below.\r\n\r\n**What problem does it solved for you?**\r\n\r\nI'm working on some optimizations for a TUI application here https://github.com/JoshKarpel/spiel/pull/37\r\n\r\nThis was my first idea on how to improve rendering time, based on https://github.com/benfred/py-spy telling me that a lot of time was being spent in `get_character_cell_size`, and this was my first thought for a solution.\r\n\r\nAdding the cache described above gives a ~30% speedup on the benchmarks I was using to work on that PR. In that application I'm repeatedly re-rendering the same content (in a `Live`), so adding a small cache to `get_character_cell_size` represents a significant speedup since the set of characters is usually the same from frame to frame. The benchmark is mostly printing colorized ASCII, with some unicode also drawn from a small set (box-drawing characters, block shapes, etc.). \r\n\r\nI guess that since there's lots of `Layout` and `Padding` going on, the most common character is probably space... perhaps the ASCII set that there's already a shortcut for could just be pre-computed and stored in a set? There's probably a lot of good ways to approach this :) ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7e1928efee53da1ac7d156912df04aef83eefea5', 'files': [{'path': 'rich/cells.py', 'Loc': {\"(None, 'get_character_cell_size', 28)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/cells.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Textualize", "repo_name": "rich", "base_commit": "5c9161d0c48254fb579827249a9ee7d88f4589b7", "is_iss": 0, "iss_html_url": "https://github.com/Textualize/rich/issues/1489", "iss_label": "Needs triage", "title": "[REQUEST] current item of a progress", "body": "when creating progress bars for logical items (that are then supported with additional progress pars,\r\ni would consider it helpful if it was possible to add a name/render able for the current item, and to push those in updates\r\n\r\ni`m not yet sure how this is best expressed/implemented", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5c9161d0c48254fb579827249a9ee7d88f4589b7', 'files': [{'path': 'rich/progress.py', 'Loc': {\"('Progress', 'update', 739)\": {'mod': []}}, 'status': 'modified'}, {'path': 'rich/progress.py', 'Loc': {\"('Task', None, 437)\": {'mod': [466]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/progress.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Textualize", "repo_name": "rich", "base_commit": "0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80", "is_iss": 0, "iss_html_url": "https://github.com/Textualize/rich/issues/2457", "iss_label": "bug", "title": "[BUG] Console(no_color=True) does not work on Windows 10", "body": "You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).\r\n\r\n**Describe the bug**\r\n\r\nThe \"no_color=True\" Console parameter does not seem to do anything on Windows 10. 
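The caching request above is straightforward to prototype. This is a sketch of the proposed shape, with `lru_cache` at the size the reporter found sufficient; the width logic is a simplified stand-in, not rich's actual table-driven lookup in `rich/cells.py`:

```python
import unicodedata
from functools import lru_cache

@lru_cache(maxsize=4096)  # 4096 was reported as plenty for the benchmark above
def get_character_cell_size(character: str) -> int:
    """Simplified stand-in for rich.cells.get_character_cell_size."""
    codepoint = ord(character)
    if 32 <= codepoint < 127:
        return 1  # printable ASCII is always a single cell
    # Rough heuristic: fullwidth/wide East Asian characters occupy two cells.
    return 2 if unicodedata.east_asian_width(character) in ("F", "W") else 1

if __name__ == "__main__":
    print(get_character_cell_size(u"a"), get_character_cell_size(u"深"))  # 1 2
```

Because a `Live` display re-renders largely the same characters frame after frame, the cache hit rate stays high, which is consistent with the ~30% speedup reported.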
I tested on both Cmder and native cmd.exe terminals and got the same results. See screenshots below.\r\n\r\nCmder:\r\n![richbug01](https://user-images.githubusercontent.com/7690118/183566141-724f7390-f9f9-4063-bf31-b0144e391975.PNG)\r\n\r\ncmd.exe\r\n![richbug02](https://user-images.githubusercontent.com/7690118/183566181-5ef45bf6-366c-4c69-b6f8-6ad25d5aff41.PNG)\r\n\r\nfor reference, this is what it looks like from my Ubuntu laptop:\r\n\r\n![richbug-linux-ok](https://user-images.githubusercontent.com/7690118/183566308-62bbd545-1c90-4345-bd3c-a228ea0f5f35.png)\r\n\r\nAlso happy to help fix this if you can point me in the right direction. Thank you!\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nOS: Windows 10\r\n\r\n**Cmder:**\r\n\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 <console width=155 ColorSystem.WINDOWS> \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'windows' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 83 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = True \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=155, height=83), \u2502\r\n\u2502 legacy_windows=True, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=155, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=83, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=155, height=83) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 155 
\u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'cygwin', \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': '157', \u2502\r\n\u2502 'LINES': '83', \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\nplatform=\"Windows\"\r\n\r\n\r\n**cmd.exe**\r\n\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 A high level console interface. 
\u2502 \u2502 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 <console width=119 ColorSystem.WINDOWS> \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2502 \u2502 color_system = 'windows' \u2502 \u2502 encoding = 'utf-8' \u2502 \u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502 \u2502 height = 30 \u2502 \u2502 is_alt_screen = False \u2502 \u2502 is_dumb_terminal = False \u2502 \u2502 is_interactive = True \u2502 \u2502 is_jupyter = False \u2502 \u2502 is_terminal = True \u2502 \u2502 legacy_windows = True \u2502 \u2502 no_color = False \u2502 \u2502 options = ConsoleOptions( \u2502 \u2502 size=ConsoleDimensions(width=119, height=30), \u2502 \u2502 legacy_windows=True, \u2502 \u2502 min_width=1, \u2502 \u2502 max_width=119, \u2502 \u2502 is_terminal=True, \u2502 \u2502 encoding='utf-8', \u2502 \u2502 max_height=30, \u2502 \u2502 justify=None, \u2502 \u2502 overflow=None, \u2502 \u2502 no_wrap=False, \u2502 \u2502 highlight=None, \u2502 \u2502 markup=None, \u2502 \u2502 height=None \u2502 \u2502 ) \u2502 \u2502 quiet = False \u2502 \u2502 record = False \u2502 \u2502 safe_box = True \u2502 \u2502 size = ConsoleDimensions(width=119, height=30) \u2502 \u2502 soft_wrap = False \u2502 \u2502 stderr = False \u2502 \u2502 style = None \u2502 \u2502 tab_size = 8 \u2502 \u2502 width = 119 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510 \u2502 Windows features available. 
\u2502 \u2502 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2502 \u2502 truecolor = False \u2502 \u2502 vt = False \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 { \u2502 \u2502 'TERM': None, \u2502 \u2502 'COLORTERM': None, \u2502 \u2502 'CLICOLOR': None, \u2502 \u2502 'NO_COLOR': None, \u2502 \u2502 'TERM_PROGRAM': None, \u2502 \u2502 'COLUMNS': None, \u2502 \u2502 'LINES': None, \u2502 \u2502 'JUPYTER_COLUMNS': None, \u2502 \u2502 'JUPYTER_LINES': None, \u2502 \u2502 'JPY_PARENT_PID': None, \u2502 \u2502 'VSCODE_VERBOSE_LOGGING': None \u2502 \u2502 } \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 platform=\"Windows\" \r\n\r\nrich==12.5.1\r\n\r\n</details>\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80', 'files': [{'path': 'rich/console.py', 'Loc': {\"('Console', None, 583)\": {'mod': [612]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/console.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "427cc215310804127b55744fcc3664ede38a4a0d", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/21363", "iss_label": "question", "title": "How does youtube-dl detect advertisements?", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions\r\n- Search the bugtracker for similar questions: http://yt-dl.org/search-issues\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README and 
FAQ for similar questions\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.\r\n-->\r\n\r\nFox Sports Go recently changed their streaming service. Previously, I used to be able to record live streams and download event replays by passing headers into streamlink. However, recording live with streamlink \"works\" just fine, but because commercials have some kind of different codec than the actual content, I can't do anything with the resulting .ts file.\r\n\r\nHowever, I can download replays from FOX.com just fine, using a youtube-dl command like this: `youtube-dl --hls-prefer-native -f 3750 https://content-auso1.uplynk.com/preplay2/6f324d0648b34576b36ce49160add428/391dec8c1a9a07b70d3062e4bf1a6e3c/4sQNPrWNbJWMzPMP2RXiNy2SFAhlIDUYbUwS2TJwN94h.m3u8?pbs=38dc148aad7c4a7f981a8dd57493a625`\r\n\r\nThe big problems with this are that a) I have to wait until a replay is posted; and b) FOX is very inconsistent as to which events get replays posted and which do not, meaning I'm SOL if I'm trying to save an event that just doesn't have a replay for some reason. If I could record live, this wouldn't be an issue, but again, the commercials are throwing things off.\r\n\r\nOne of the output lines from youtube-dl is `[hlsnative] Total fragments: 1815 (not including 504 ad)`.\r\n\r\nSo my question is: how does youtube-dl detect which segments are ads in the .m3u8 file? If I can figure that out, perhaps I can rig streamlink to ignore those segments when recording, saving me a lot of trouble.\r\n\r\nThanks!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '427cc215310804127b55744fcc3664ede38a4a0d', 'files': [{'path': 'youtube_dl/downloader/hls.py', 'Loc': {\"('HlsFD', 'is_ad_fragment_start', 78)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/downloader/hls.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "8b7340a45eb0e3aeaa996896ff8690b6c3a32af6", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/15955", "iss_label": "", "title": "use youtube-dl with cookies file in code not from command line ", "body": "## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.03.20*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. 
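Back on the ad-detection question above: youtube-dl's native HLS downloader does not inspect codecs; it reads comment tags in the media playlist (see `HlsFD.is_ad_fragment_start` in `youtube_dl/downloader/hls.py`). The tag spellings below are what Uplynk/Anvato playlists are understood to use, so treat the exact strings as assumptions; a sketch of skipping flagged fragments the same way:

```python
def is_ad_fragment_start(line):
    # Mirrors the kind of test youtube-dl applies to playlist comment lines.
    return (line.startswith("#ANVATO-SEGMENT-INFO") and "type=ad" in line
            or line.startswith("#UPLYNK-SEGMENT") and line.endswith(",ad"))

def is_ad_fragment_end(line):
    return (line.startswith("#ANVATO-SEGMENT-INFO") and "type=master" in line
            or line.startswith("#UPLYNK-SEGMENT") and line.endswith(",segment"))

def content_fragments(m3u8_text):
    """Yield non-ad fragment URLs from an HLS media playlist."""
    in_ad = False
    for line in m3u8_text.splitlines():
        line = line.strip()
        if is_ad_fragment_start(line):
            in_ad = True
        elif is_ad_fragment_end(line):
            in_ad = False
        elif line and not line.startswith("#") and not in_ad:
            yield line

if __name__ == "__main__":
    sample = "#EXTM3U\n#UPLYNK-SEGMENT: 0001,ad\nad0.ts\n#UPLYNK-SEGMENT: 0002,segment\nmain0.ts\n"
    print(list(content_fragments(sample)))  # ['main0.ts'] under these assumptions
```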
Issues with outdated version will be rejected.\r\n- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2018.03.20**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [ ] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [X ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']\r\n[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n[debug] youtube-dl version 2018.03.20\r\n[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n...\r\n<end of log>\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: https://www.youtube.com/watch?v=BaW_jenozKc\r\n- Single video: https://youtu.be/BaW_jenozKc\r\n- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nExplanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). 
Provide as much context and examples as possible.\r\nIf work on your *issue* requires account credentials please provide them or explain how one can obtain them.\r\n\r\n\r\n\r\n\r\n```\r\nfrom __future__ import unicode_literals\r\nimport youtube_dl\r\n\r\nydl_opts = {}\r\nwith youtube_dl.YoutubeDL(ydl_opts) as ydl:\r\n ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])\r\n```\r\nThis downloads a plain YouTube video. I need to know how to add a cookies file so that I can download from my account on Lynda; I'm trying to build a small downloader to speed up the process. Any idea how to add a cookies file?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8b7340a45eb0e3aeaa996896ff8690b6c3a32af6', 'files': [{'path': 'youtube_dl/YoutubeDL.py', 'Loc': {\"('YoutubeDL', None, 113)\": {'mod': [208]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/YoutubeDL.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "267d81962a0709f15f82f96b7aadbb5473a06992", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16870", "iss_label": "", "title": "[bilibili] How can I download the video on page 2?", "body": "### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.25*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.25**\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [x] Question\r\n- [ ] Other\r\n\r\nI am trying to use youtube-dl to download a video on bilibili like https://www.bilibili.com/video/av18178195\r\n\r\nThe video has 2 pages, but when I type **youtube-dl -f 1 https://www.bilibili.com/video/av18178195**\r\nI just get the video on page 1; how can I get the video on page 2?\r\nI have seen this page https://github.com/rg3/youtube-dl/pull/16354\r\nbut when I use \r\n**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/index_2.html** or \r\n**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/?p=2**\r\n\r\nit still downloads the same video from page 1.\r\nHow can I solve this problem? Thank you.\r\nIs this problem fixed?
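The cookies question above has a direct answer in the embedded API: the `cookiefile` option is the programmatic equivalent of the `--cookies` command-line flag. A minimal sketch, with the path as an example:

```python
from __future__ import unicode_literals
import youtube_dl

ydl_opts = {
    # Netscape-format cookies.txt exported from the browser; example path.
    "cookiefile": "cookies.txt",
}

with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(["https://www.youtube.com/watch?v=BaW_jenozKc"])
```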
I use the standalone exe version.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '267d81962a0709f15f82f96b7aadbb5473a06992', 'files': [{'path': 'youtube_dl/extractor/bilibili.py', 'Loc': {\"('BiliBiliIE', None, 25)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/bilibili.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16883", "iss_label": "", "title": "[Feature request] Network retry, with configurability", "body": "I just ran some large youtube-dl scripts, and noticed at the end that a few videos were missing.\r\n\r\nThis was probably due to intermittent network downtimes, and apparently youtube-dl doesn't do any network retry at all (I may be wrong).\r\n\r\nThus, I suggest adding an option named for example `--network-retry`, related to `--socket-timeout`. The default would be 0 to keep the current youtube-dl behavior, and I could configure it to something like 5.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {\"(None, 'parseOpts', 41)\": {'mod': [458, 462]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/options.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "5014bd67c22b421207b2650d4dc874b95b36dda1", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/30539", "iss_label": "question", "title": "Limited download speed", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions\r\n- Search the bugtracker for similar questions: http://yt-dl.org/search-issues\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [ yes] I'm asking a question\r\n- [ yes] I've looked through the README and FAQ for similar questions\r\n- [yes ] I've searched the bugtracker for similar questions including closed ones\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.\r\n-->\r\n\r\nWRITE QUESTION HERE\r\n\r\nHello. For a few days now I have been experiencing a drop in download speed from the YouTube site when using youtube-dl. Can you fix it? I tried downloading videos from other websites and they download at full speed. It only happens to me with the YouTube site;
I think they made some change to their platform.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5014bd67c22b421207b2650d4dc874b95b36dda1', 'files': [{'path': 'youtube_dl/extractor/youtube.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "e90d175436e61e207e0b0cae7f699494dcf15922", "is_iss": 0, "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/9104", "iss_label": "", "title": "Chinese title was missing!", "body": "```\nroot@kangland:/var/www/ydy# youtube-dl -v w0dMz8RBG7g\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-v', u'w0dMz8RBG7g']\n[debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968\n[debug] youtube-dl version 2016.04.01\n[debug] Python version 2.7.6 - Linux-2.6.32-042stab113.11-i686-with-Ubuntu-14.04-trusty\n[debug] exe versions: none\n[debug] Proxy map: {}\n[youtube] w0dMz8RBG7g: Downloading webpage\n[youtube] w0dMz8RBG7g: Downloading video info webpage\n[youtube] w0dMz8RBG7g: Extracting video information\n[youtube] {22} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] w0dMz8RBG7g: Downloading player https://s.ytimg.com/yts/jsbin/player-en_US-vfli5QvRo/base.js\n[youtube] {43} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {18} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {5} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {36} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {17} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {136} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {247} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {135} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {244} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {134} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {243} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {133} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {242} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {160} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {278} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {140} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {171} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {249} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {250} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {251} signature length 41.43, html5 player en_US-vfli5QvRo\n[debug] Invoking downloader on
u'https://r2---sn-a8au-vgqe.googlevideo.com/videoplayback?ms=au&mt=1460039622&pl=40&mv=m&key=yt6&pte=yes&mm=31&mn=sn-a8au-vgqe&sver=3&fexp=9407059%2C9416126%2C9416891%2C9420452%2C9422596%2C9423662%2C9426926%2C9427902%2C9428398%2C9432364&ratebypass=yes&ipbits=0&initcwndbps=26957500&expire=1460061513&upn=NhCteH8M5OA&mime=video%2Fmp4&axtags=tx%3D9417362&id=o-AEE-ylzEiNeRWF2HIs5_rsDGUftXqgxkV7V0eUSq7oZ4&dur=214.111&source=youtube&ip=2602%3Aff62%3A104%3Ae6%3A%3A&sparams=axtags%2Cdur%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cpte%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&requiressl=yes&lmt=1458219184364643&itag=22&signature=B1E1AF27412C916392FF49F1D60F0771145BE274.DA5587721204D947940DB57A584188E732C36433'\n[download] Destination: Wanting - (You Exist In My Song) [Trad. Chinese] [Official Music Video]-w0dMz8RBG7g.mp4\n[download] 100% of 32.20MiB in 00:00\n\n```\n\n```\nroot@kangland:/var/www/ydy# locale\nLANG=\nLANGUAGE=\nLC_CTYPE=\"POSIX\"\nLC_NUMERIC=\"POSIX\"\nLC_TIME=\"POSIX\"\nLC_COLLATE=\"POSIX\"\nLC_MONETARY=\"POSIX\"\nLC_MESSAGES=\"POSIX\"\nLC_PAPER=\"POSIX\"\nLC_NAME=\"POSIX\"\nLC_ADDRESS=\"POSIX\"\nLC_TELEPHONE=\"POSIX\"\nLC_MEASUREMENT=\"POSIX\"\nLC_IDENTIFICATION=\"POSIX\"\nLC_ALL=\n```\n\n```\nroot@kangland:/var/www/ydy# locale -a\nC\nC.UTF-8\nPOSIX\nzh_CN.utf8\nzh_HK.utf8\nzh_TW.utf8\n```\n\n**Run :** `youtube-dl -f 'best[height=360]' --restrict-filenames -i -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' PL1OKxDwI_y_AO1Lb-zO57wYdpWqhk7MUs`\n\n**Result :** [download] _/01 - _.mp4\n\nHow to fix chinese title ? \n\nThank you so much !\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e90d175436e61e207e0b0cae7f699494dcf15922', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {\"(None, 'parseOpts', 22)\": {'mod': [447]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/options.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "3794f1e20a56f3b7bcd23f82a006e266f2a57a05", "is_iss": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/2511", "iss_label": "type: usage", "title": "Cannot connect to DynamoDB from lambda", "body": "<!-- Love localstack? Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/localstack/donate -->\r\n\r\n# Type of request: This is a ...\r\n\r\n- [x] bug report\r\n- [ ] feature request\r\n\r\n# Detailed description\r\nI'm using localstack for local development. 
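The vanished Chinese title above is the documented effect of `--restrict-filenames`, which maps non-ASCII filename characters to `_`; dropping that flag (with one of the UTF-8 locales the machine already has, e.g. `zh_CN.utf8`, exported as `LC_ALL`) keeps the CJK title. A small sketch using youtube-dl's own helper, with a hypothetical title:

```python
# -*- coding: utf-8 -*-
from youtube_dl.utils import sanitize_filename

title = u"我的歌声里"  # hypothetical CJK title, for illustration only

# With restricted=True (what --restrict-filenames does), the non-ASCII
# characters collapse to '_', which matches the "01 - _.mp4" result above.
print(sanitize_filename(title, restricted=True))
# Without the restriction, the CJK characters survive in the filename.
print(sanitize_filename(title, restricted=False))
```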
I have a DynamoDB table named `readings` and I'd like \r\nto insert items from a lambda function.\r\nI have a simple function in python runtime:\r\n\r\n```python\r\nimport os\r\nimport boto3\r\n\r\ndef lambda_handler(events, context):\r\n DYNAMODB_ENDPOINT_URL = os.environ.get(\"DYNAMODB_ENDPOINT_URL\")\r\n dynamodb = boto3.resource(\"dynamodb\", endpoint_url=DYNAMODB_ENDPOINT_URL)\r\n readings_table = dynamodb.Table(DYNAMODB_READINGS_TABLE_NAME)\r\n\r\n readings_table.put_item(Item={\"reading_id\": \"10\", \"other\": \"test\"})\r\n```\r\n\r\nI cannot figure out what is the proper endpoint url for my local DynamoDB.\r\nI have tried different combinations of `localhost`, `localstack` and ports `4566`, `4569`, each time I get error `EndpointConnectionError`\r\n\r\n## Expected behavior\r\nItems are inserted in the table.\r\n\r\n## Actual behavior\r\nLambda cannot connect to dynamodb and error `[ERROR] EndpointConnectionError: Could not connect to the endpoint URL: \"http://localstack:4569/\"` is raised.\r\n\r\n# Steps to reproduce\r\n\r\nRun localstack image with docker-compose, set `LOCALSTACK_HOSTNAME=localstack` and try to access dynamodb resource from lambda.\r\n\r\n## Command used to start LocalStack\r\ndocker-compose service I'm using:\r\n```yml\r\n localstack:\r\n image: localstack/localstack:0.11.2\r\n ports:\r\n - 4566:4566\r\n - 8080:8080\r\n environment:\r\n SERVICES: \"dynamodb,sqs,lambda,iam\"\r\n DATA_DIR: \"/tmp/localstack/data\"\r\n PORT_WEB_UI: \"8080\"\r\n LOCALSTACK_HOSTNAME: localstack\r\n LAMBDA_EXECUTOR: docker\r\n AWS_ACCESS_KEY_ID: \"test\"\r\n AWS_SECRET_ACCESS_KEY: \"test\"\r\n AWS_DEFAULT_REGION: \"us-east-1\"\r\n volumes:\r\n - localstack_volume:/tmp/localstack/data\r\n - /var/run/docker.sock:/var/run/docker.sock\r\n # When a container is started for the first time, it will execute files with extensions .sh that are found in /docker-entrypoint-initaws.d. \r\n # Files will be executed in alphabetical order. You can easily create aws resources on localstack using `awslocal` (or `aws`) cli tool in the initialization scripts.\r\n # Here I run creating dynamodb tables, roles, etc.\r\n - ./localstack-startup-scripts/:/docker-entrypoint-initaws.d/\r\n```\r\n\r\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [19], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "is_iss": 0, "iss_html_url": "https://github.com/localstack/localstack/issues/1078", "iss_label": "", "title": "Connect to localhost:4568 [localhost/127.0.0.1] failed: Connection refused (Connection refused)", "body": "Hi there, I am having trouble connecting to Kinesis on localstack. 
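For the Lambda-to-DynamoDB problem above, the handler must target the LocalStack container itself rather than `localhost` inside the Lambda container. LocalStack injects `LOCALSTACK_HOSTNAME` into containers it spawns for exactly this purpose, and 4566 is the edge port of the 0.11.x image in use; a sketch of the handler under those assumptions:

```python
import os

import boto3  # bundled in the Lambda Python runtime

# LOCALSTACK_HOSTNAME resolves to the LocalStack container from inside the
# docker-executor Lambda container; 4566 is the single edge port.
ENDPOINT = "http://{}:4566".format(os.environ.get("LOCALSTACK_HOSTNAME", "localhost"))

dynamodb = boto3.resource("dynamodb", endpoint_url=ENDPOINT)

def lambda_handler(event, context):
    readings = dynamodb.Table("readings")  # table name from the report
    readings.put_item(Item={"reading_id": "10", "other": "test"})
    return {"status": "ok"}
```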
Everything runs fine when I run it locally, the error happens inside of our Jenkins pipeline.\r\n\r\nHere is the Dockerfile I am using:\r\n```\r\nFROM hseeberger/scala-sbt:8u181_2.12.7_1.2.6\r\n\r\nUSER root\r\nRUN apt-get update\r\nRUN apt-get -y install curl\r\nRUN curl -sL https://deb.nodesource.com/setup_8.x | bash -\r\nRUN apt-get -y install nodejs\r\nRUN apt-get install npm\r\nRUN curl -L \"https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\r\nRUN chmod +x /usr/local/bin/docker-compose\r\n```\r\n\r\nAnd here is my docker-compose.yml:\r\n```\r\nversion: '3.6'\r\n\r\nservices:\r\n # AWS services in docker env\r\n localstack:\r\n image: localstack/localstack:latest\r\n environment:\r\n - SERVICES=kinesis,dynamodb,s3,cloudwatch\r\n - HOSTNAME_EXTERNAL=localstack\r\n - DATA_DIR=/tmp/localstack/data\r\n volumes:\r\n - \"/tmp:/tmp\"\r\n ports:\r\n - \"4568:4568\"\r\n - \"4569:4569\"\r\n - \"4572:4572\"\r\n - \"4582:4582\"\r\n\r\n postgres:\r\n image: \"postgres:9.6\"\r\n restart: always\r\n ports:\r\n - \"5432:5432\"\r\n environment:\r\n POSTGRES_USER: dev\r\n POSTGRES_PASSWORD: *******\r\n POSTGRES_DB: table\r\n PGPASSWORD: *******\r\n volumes:\r\n - ./docker/postgres-init:/docker-entrypoint-initdb.d\r\n\r\n mocks:\r\n image: \"jordimartin/mmock\"\r\n volumes:\r\n - \"./docker/mocks:/config\"\r\n ports:\r\n - \"8082:8082\"\r\n - \"8083:8083\"\r\n - \"8084:8084\"\r\n\r\n aws-create-stream:\r\n image: \"ivonet/aws-cli:1.0.0\"\r\n links:\r\n - localstack\r\n volumes:\r\n - ${HOME}/.aws:/root/.aws:ro\r\n command: --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name RawScanPipe --shard-count 1\r\n environment:\r\n - AWS_DEFAULT_REGION=us-east-1\r\n\r\n #PGAdmin gives a nice gui on the PostgreSQL DB\r\n pgadmin:\r\n image: dpage/pgadmin4\r\n links:\r\n - postgres\r\n depends_on:\r\n - postgres\r\n ports:\r\n - \"8888:80\"\r\n volumes:\r\n - ./docker/pgadmin:/var/lib/pgadmin\r\n environment:\r\n PGADMIN_DEFAULT_EMAIL: *********\r\n PGADMIN_DEFAULT_PASSWORD: *********\r\n```\r\n\r\nIn case it matters, here is the segment in our Jenkins file where this gets called:\r\n```\r\ndef sbtInside() {\r\n return \"-u root -v /usr/bin/docker:/usr/bin/docker \" +\r\n \"-v /usr/local/bin/aws:/usr/local/bin/aws \" +\r\n \"-v /var/run/docker.sock:/var/run/docker.sock \" +\r\n \"-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/libltdl.so.7 \" +\r\n \"-v $HOME/.ivy2:/root/.ivy2 \" +\r\n \"-v $HOME/.sbt:/root/.sbt\"\r\n}\r\n\r\n stage(\"Unit/Functional Tests & Create Dockerfile\") {\r\n app.inside(sbtInside()) {\r\n try {\r\n echo \"Starting unit tests...\"\r\n sh \"TARGET=LOCAL sbt clean test\"\r\n\r\n echo \"Starting up test stack...\"\r\n sh \"docker-compose -f docker-compose.yml up -d\"\r\n\r\n echo \"Starting functional tests...\"\r\n sh \"TARGET=LOCAL \" +\r\n \"PRODUCT_ENABLED=true \" +\r\n \"sbt clean functional/test\"\r\n } finally {\r\n echo \"Tests complete!\"\r\n sh \"docker-compose -f docker-compose.yml down -v\"\r\n sh \"sbt docker\"\r\n }\r\n }\r\n }\r\n```\r\n\r\nI am sure I am missing something simple, I just can't figure out what it is!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": 
"Config\nCode"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "is_iss": 0, "iss_html_url": "https://github.com/localstack/localstack/issues/1095", "iss_label": "", "title": "Healthcheck when running in docker", "body": "I'm running localstack with docker-compose as a dependency for a service that I'm developing. The problem is that my service calls localstack before it's fully initialized. The only solution I could find so far is a hard `sleep <seconds>` at start-up, but that only works on my specific system and produces unexpected results for other developers. Can localstack expose a healthcheck, so I can have docker-compose start my service after localstack is \"healthy\"?\r\n\r\nA trimmed down version of my docker-compose.yml looks something like this:\r\n```yaml\r\nmyservice:\r\n command: \"sh -c 'sleep 10 && npm run start'\" #grrrrr\r\n depends_on:\r\n - aws\r\n # aws:\r\n # condition: service_healthy\r\naws:\r\n image: localstack/localstack\r\n environment:\r\n SERVICES: s3:81,sqs:82,ses:83\r\n HOSTNAME_EXTERNAL: aws\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "5d11af78ae1d19560f696a9e1abb707bd115c390", "is_iss": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/4970", "iss_label": "type: bug\nstatus: triage needed\narea: configuration\naws:cloudformation\narea: networking", "title": "Lambda invocation exception", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nCreating and/or updating Lambda functions in docker does not work after updating LocalStack image to the latest version with the following error in LocalStack logs:\r\n```\r\n2021-11-20T03:33:32.357:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role result / log output:\r\n\r\n> standard_init_linux.go:228: exec user process caused: exec format error\r\n2021-11-20T03:33:32.814:INFO:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role: Lambda process returned with error. Result: . 
Output:\r\nstandard_init_linux.go:228: exec user process caused: exec format error Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 608, in run_lambda_executor\r\n result, log_output = self.execute_in_container(\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_launcher.py.enc\", line 272, in docker_separate_execute_in_container\r\n File \"/opt/code/localstack/localstack/utils/docker_utils.py\", line 1335, in start_container\r\n raise ContainerException(\r\nlocalstack.utils.docker_utils.ContainerException: Docker container returned with exit code 1\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_api.py\", line 809, in run_lambda\r\n result = LAMBDA_EXECUTOR.execute(\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 441, in execute\r\n return do_execute()\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 431, in do_execute\r\n return _run(func_arn=func_arn)\r\n File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 158, in wrapped\r\n raise e\r\n File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 154, in wrapped\r\n result = func(*args, **kwargs)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 418, in _run\r\n raise e\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 414, in _run\r\n result = self._execute(lambda_function, inv_context)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 726, in _execute\r\n result = self.run_lambda_executor(lambda_function=lambda_function, inv_context=inv_context)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc\", line 548, in run_lambda_executor\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 649, in run_lambda_executor\r\n raise InvocationException(\r\nlocalstack.services.awslambda.lambda_executors.InvocationException: Lambda process returned with error. Result: . Output:\r\nstandard_init_linux.go:228: exec user process caused: exec format error\r\n\r\n2021-11-20T03:33:55.187:INFO:localstack_ext.services.cloudformation.service_models: Unable to fetch CF custom resource result from s3://localstack-cf-custom-resources-results/62c433d4 . Existing keys: []\r\n2021-11-20T03:33:55.189:DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack \"lambda-socket-local\": An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist. 
Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1482, in _run\r\n self.do_apply_changes_in_loop(changes, stack, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1554, in do_apply_changes_in_loop\r\n self.apply_change(change, stack, new_resources, stack_name=stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1619, in apply_change\r\n result = deploy_resource(resource_id, new_resources, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 778, in deploy_resource\r\n result = execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 829, in execute_resource_action\r\n result = func[\"function\"](resource_id, resources, resource_type, func, stack_name)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py\", line 61, in create_custom_resource\r\n result=retry(fetch_result,retries=KIGak(CUSTOM_RESOURCES_RESULT_POLL_TIMEOUT/2),sleep=2)\r\n File \"/opt/code/localstack/localstack/utils/common.py\", line 812, in retry\r\n raise raise_error\r\n File \"/opt/code/localstack/localstack/utils/common.py\", line 808, in retry\r\n return function(**kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py\", line 58, in fetch_result\r\n return aws_utils.download_s3_object(CUSTOM_RESOURCES_RESULTS_BUCKET,result_key)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/utils/aws/aws_utils.py.enc\", line 31, in download_s3_object\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 391, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 719, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.\r\n```\r\n\r\n### Expected Behavior\r\n\r\nLambda create and/or update operations should pass successfully all the way to the end without any errors.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n```yml\r\nservices:\r\n localstack:\r\n container_name: localstack\r\n image: localstack/localstack\r\n ports:\r\n - 443:443\r\n - 4510-4530:4510-4530\r\n - 4566:4566\r\n - 4571:4571\r\n environment:\r\n - LOCALSTACK_API_KEY=${LOCALSTACK_LICENSE}\r\n - USE_LIGHT_IMAGE=1\r\n - IMAGE_NAME=localstack/localstack\r\n - MAIN_CONTAINER_NAME=localstack\r\n - SERVICES=cloudformation,cloudfront,apigateway,apigatewayv2,iam,secretsmanager,lambda,s3,sqs,sts,ec2,kafka,elb,elbv2\r\n - DEFAULT_REGION=us-east-1\r\n - AWS_ACCESS_KEY_ID=test\r\n - AWS_SECRET_ACCESS_KEY=test\r\n - EAGER_SERVICE_LOADING=1\r\n - S3_SKIP_SIGNATURE_VALIDATION=1\r\n - DEBUG=1\r\n volumes:\r\n - /var/run/docker.sock:/var/run/docker.sock\r\n network_mode: bridge\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nA test case 
available at [GitHub](https://github.com/abbaseya/localstack-msk-lambda-test) - test command `./socket.sh`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: macOS 12.0.1\r\n- LocalStack: latest\r\n- AWS CLI: 2.2.35\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n#4932 ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [96], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "localstack", "repo_name": "localstack", "base_commit": "c07094dbf52c947e77d952825eb4daabf409655d", "is_iss": 1, "iss_html_url": "https://github.com/localstack/localstack/issues/5516", "iss_label": "type: bug\nstatus: triage needed\nstatus: response required\naws:cognito", "title": "bug: JWT ID Token issued by cognito-idp can not be verified in v0.14.0 but can in 0.11.5", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nJWT tokens issued by cognito can not be verified.\r\n\r\n### Expected Behavior\r\n\r\nJWT tokens issues by cognito should be verifiable.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith the `localstack` script\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n`LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`\r\n\r\n`LocalStack CLI 0.14.0.1`\r\n`LocalStack version: 0.14.0`\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\nCreate the following files in some directory:\r\n`package.json` file:\r\n```json\r\n{\r\n \"name\": \"localstack-jwt\",\r\n \"version\": \"1.0.0\",\r\n \"description\": \"\",\r\n \"main\": \"index.js\",\r\n \"scripts\": {\r\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\r\n },\r\n \"keywords\": [],\r\n \"author\": \"\",\r\n \"license\": \"ISC\",\r\n \"dependencies\": {\r\n \"jsonwebtoken\": \"^8.5.1\",\r\n \"jwk-to-pem\": \"^2.0.5\",\r\n \"node-fetch\": \"^2.6.7\"\r\n }\r\n}\r\n\r\n```\r\n`create-user-pool.json` file:\r\n\r\n```json\r\n{\r\n \"PoolName\": \"test\",\r\n \"Policies\": {\r\n \"PasswordPolicy\": {\r\n \"MinimumLength\": 6,\r\n \"RequireUppercase\": false,\r\n \"RequireLowercase\": false,\r\n \"RequireNumbers\": false,\r\n \"RequireSymbols\": false,\r\n \"TemporaryPasswordValidityDays\": 5\r\n }\r\n },\r\n \"AdminCreateUserConfig\": {\r\n \"AllowAdminCreateUserOnly\": false,\r\n \"UnusedAccountValidityDays\": 5\r\n }\r\n}\r\n\r\n```\r\n\r\n`localstack.js` file:\r\n```javascript\r\nconst jwkToPem = require('jwk-to-pem');\r\nconst jwt = require('jsonwebtoken');\r\nconst ps = require('process');\r\nconst fetch = require('node-fetch');\r\n(async () => {\r\n const token = ps.argv[2];\r\n console.log('<== TOKEN:', token);\r\n console.log('==> http://localhost:4566/userpool/.well-known/jwks.json')\r\n const jwksResponse = await fetch('http://localhost:4566/userpool/.well-known/jwks.json');\r\n const jwks = await jwksResponse.json();\r\n console.log('<==', jwks);\r\n\r\n let decodedToken = jwt.decode(token, { complete: true });\r\n console.log('DECODED TOKEN:', decodedToken);\r\n const publicKey = jwkToPem(jwks.keys[0]);\r\n console.log('PUBLIC KEY:', publicKey);\r\n try {\r\n const 
decoded = jwt.verify(token, publicKey);\r\n console.log('!!! JWT is valid');\r\n } catch (err) {\r\n console.error('!!! ERROR:', err.message);\r\n }\r\n\r\n})();\r\n```\r\n\r\n`setup.sh` file:\r\n```bash\r\n#!/bin/bash\r\necho \"Creating User Pool\"\r\nUSERNAME=user1\r\nPASSWORD=password1\r\nUSER_POOL_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool \\\r\n --pool-name test \\\r\n --cli-input-json file://create-user-pool.json | jq -r '.UserPool.Id' )\r\necho \"User Pool Created: ${USER_POOL_ID}\"\r\n\r\necho \"Creating User Pool Client\"\r\nCLIENT_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool-client \\\r\n--user-pool-id \"$USER_POOL_ID\" \\\r\n--client-name client \\\r\n--explicit-auth-flows ALLOW_USER_PASSWORD_AUTH | jq -r '.UserPoolClient.ClientId')\r\necho \"User Pool Created: ${CLIENT_ID}\"\r\n\r\necho \"Sign Up User: user1/password1\"\r\naws --endpoint-url=http://localhost:4566 cognito-idp sign-up \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --username \"$USERNAME\" \\\r\n --password \"$PASSWORD\" && echo \"Sign Up Success\" || echo \"Failed to Sign Up\"\r\n\r\necho \"Please enter confirmation code printed in terminal with 'localstack start' and hit Enter:\"\r\nread CONFIRMATION_CODE\r\n\r\naws --endpoint-url=http://localhost:4566 cognito-idp confirm-sign-up \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --username \"$USERNAME\" \\\r\n --confirmation-code \"$CONFIRMATION_CODE\" && echo \"User Confirmed\" || echo \"Unable to confirm\"\r\n\r\necho \"Authenticating User\"\r\nID_TOKEN=$( aws --endpoint-url=http://localhost:4566 cognito-idp initiate-auth \\\r\n --auth-flow USER_PASSWORD_AUTH \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --auth-parameters USERNAME=\"$USERNAME\",PASSWORD=\"$PASSWORD\" | jq -r '.AuthenticationResult.IdToken' )\r\n\r\necho \"Validating ID TOKEN\"\r\nnode localstack.js \"$ID_TOKEN\"\r\n\r\n```\r\n\r\n## Run\r\n* `npm install`\r\n* start localstack `LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`\r\n* run `./setup.sh`\r\n* script will ask for confirmation code printed in localstack console\r\n* finally script will output `!!! ERROR: invalid signature`\r\n\r\n## Try the same with 0.11.5\r\n* `./setup.sh` will print `!!! JWT is valid`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: MacOS Monterey 12.2.1\r\n- LocalStack: 0.14.0\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nRepository with scripts you can use to reproduce issue: https://github.com/poul-kg/localstack-jwt", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [82], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad", "is_iss": 0, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/499", "iss_label": "Bug", "title": "raise Exception(\"`interpreter.chat()` requires a display. 
Set `display=True` or pass a message into `interpreter.chat(message)`.\")", "body": "### Describe the bug\n\nFresh install on ubuntu 22,\r\nI'm using interpreter in terminal.\r\n\r\nAfter sending a prompt, at some point on the answer the program crashes\r\n```\r\n\r\n> Traceback (most recent call last):\r\n File \"/home/fauxprophet/Documents/Ops/openai/bin/interpreter\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 21, in cli\r\n cli(self)\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/cli/cli.py\", line 145, in cli\r\n interpreter.chat()\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 65, in chat\r\n for _ in self._streaming_chat(message=message, display=display):\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 86, in _streaming_chat\r\n yield from terminal_interface(self, message)\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/terminal_interface/terminal_interface.py\", line 50, in terminal_interface\r\n for chunk in interpreter.chat(message, display=False, stream=True):\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 106, in _streaming_chat\r\n raise Exception(\"`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.\")\r\nException: `interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.\r\n\r\n```\r\n\n\n### Reproduce\n\n1. open terminal\r\n2. run cmd : \"interpreter\"\r\n3. ask something like \"can you change the color of my termninal? provide me with a few different options, and let me choose using a keystroke (1,2,3)?\"\r\n4. Wait for answers\r\n5. 
While answering the program crashes\n\n### Expected behavior\n\nNot crash\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.1.5\n\n### Python version\n\n3.10.12\n\n### Operating System name and version\n\nUbuntu 22\n\n### Additional context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad', 'files': [{'path': 'interpreter/core/core.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/core/core.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d", "is_iss": 0, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/15", "iss_label": "", "title": "Error: cannot import name 'cli' from 'interpreter'", "body": "```console\r\n\r\n\u2570\u2500$ uname -a\r\nLinux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n\u2570\u2500$ pip --version 1 \u21b5\r\npip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)\r\n\r\n\u2570\u2500$ interpreter \r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/interpreter\", line 5, in <module>\r\n from interpreter import cli\r\nImportError: cannot import name 'cli' from 'interpreter' (unknown location)\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d', 'files': [{'path': 'interpreter/interpreter.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "36ec07125efec86594c91e990f68e0ab214e7edf", "is_iss": 0, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1548", "iss_label": "", "title": "run interpreter --model ollama/qwen2.5:3b error", "body": "### Bug Description\r\n\r\nWhen executing the command `interpreter --model ollama/qwen2.5:3b`, an error occurs with the specific error message:\r\n\r\n```\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)\r\n```\r\n\r\nThis error indicates that there is an unterminated string while trying to parse a JSON string, which usually happens when the response data is incomplete or improperly formatted.\r\n\r\n### Error Log\r\n\r\n```plaintext\r\n\r\nC:\\Users\\unsia>interpreter --model ollama/qwen2.5:3b\r\n\r\n\u258c Model set to ollama/qwen2.5:3b\r\n\r\nLoading qwen2.5:3b...\r\n\r\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\interpreter.exe\\__main__.py\", line 7, in <module>\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\start_terminal_interface.py\", line 612, in main\r\n 
start_terminal_interface(interpreter)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\start_terminal_interface.py\", line 560, in start_terminal_interface\r\n validate_llm_settings(\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\validate_llm_settings.py\", line 109, in validate_llm_settings\r\n interpreter.llm.load()\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 397, in load\r\n self.interpreter.computer.ai.chat(\"ping\")\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\computer\\ai\\ai.py\", line 134, in chat\r\n for chunk in self.computer.interpreter.llm.run(messages):\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 322, in run\r\n yield from run_tool_calling_llm(self, params)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\run_tool_calling_llm.py\", line 178, in run_tool_calling_llm\r\n for chunk in llm.completions(**request_params):\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 466, in fixed_litellm_completions\r\n raise first_error # If all attempts fail, raise the first error\r\n ^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 443, in fixed_litellm_completions\r\n yield from litellm.completion(**params)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\litellm\\llms\\ollama.py\", line 455, in ollama_completion_stream\r\n raise e\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\litellm\\llms\\ollama.py\", line 433, in ollama_completion_stream\r\n function_call = json.loads(response_content)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)\r\n\r\n\r\n```\r\n\r\n### Analysis Process\r\n\r\n- **Call Stack**: The error occurs in the file `litellm/llms/ollama.py` when attempting to parse the model's response using `json.loads(response_content)`.\r\n- **Potential Causes**:\r\n - The format of the data returned by the model may not meet expectations.\r\n - It might be due to network issues, server-side problems, or the model's response format being non-compliant, leading to empty or partial responses from the model.\r\n\r\n### Suggested Solutions\r\n\r\n1. **Check the Model's Response**: Ensure that the API response from the model is complete and properly formatted as JSON. 
Debugging can be facilitated by printing out `response_content`.\r\n2. **Catch Errors and Print More Information**: Before calling `json.loads()`, add checks to ensure that `response_content` is indeed a valid JSON string.\r\n\r\nExample Code:\r\n\r\n```python\r\nif response_content:\r\n try:\r\n parsed_data = json.loads(response_content)\r\n except json.JSONDecodeError as e:\r\n print(f\"JSON Decode Error: {e}\")\r\n print(f\"Response content: {response_content}\")\r\nelse:\r\n print(\"Empty response content\")\r\n```\r\n\r\n### Steps to Reproduce\r\n\r\nTo be filled with specific steps to reproduce this issue.\r\n\r\n### Expected Behavior\r\n\r\nTo be filled with the expected behavior from the user's perspective.\r\n\r\n### Environment Information\r\n\r\n- **Open Interpreter Version**: Open Interpreter 0.4.3 Developer Preview\r\n- **Python Version**: Python 3.11.0\r\n- **Operating System**: Windows 11\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '36ec07125efec86594c91e990f68e0ab214e7edf', 'files': [{'path': 'docs/usage/terminal/arguments.mdx', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n"}, "loctype": {"code": [], "doc": ["docs/usage/terminal/arguments.mdx"], "test": [], "config": [], "asset": []}} -{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "8fb4668dc7451ac58ac57ba587ed77194469f739", "is_iss": 1, "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1175", "iss_label": "", "title": "Error when inporting interpreter", "body": "### Describe the bug\n\nI have the following error when I try to import interpreter:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/seba/workspace/AutoProgrammer/interpreter.py\", line 1, in <module>\r\n from interpreter import interpreter\r\n File \"/home/seba/workspace/AutoProgrammer/interpreter.py\", line 1, in <module>\r\n from interpreter import interpreter\r\nImportError: cannot import name 'interpreter' from partially initialized module 'interpreter' (most likely due to a circular import\r\n```\r\nI'm not python expert, but can't figure out what I did wrong. I installed open-interpreter with pip, pip in venv, conda but nothing helps. Other libs like crewai have no problem with imports.\n\n### Reproduce\n\n1. install open-interpreter\r\n2. inport into .py file `from interpreter import interpreter`\r\n3. 
run file\n\n### Expected behavior\n\nImport works\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.2.4\n\n### Python version\n\n3.11.8\n\n### Operating System name and version\n\nFedora\n\n### Additional context\n\nTested with open-interpreter `0.2.0` and `0.2.4`, python `3.10` and `3.11`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": "/home/seba/workspace/AutoProgrammer/interpreter.py"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["/home/seba/workspace/AutoProgrammer/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "3bc25680529cdb6b5d407c8332e820aeb2e0b948", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/66", "iss_label": "", "title": "WebSocket error code", "body": "\r\n\"Your demonstration website has the same error, please take a look.\"", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3bc25680529cdb6b5d407c8332e820aeb2e0b948', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "2f88cf9b2568163954ecc7c20ef9879263bfc9ba", "is_iss": 1, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/476", "iss_label": "", "title": "Error generating code. Please contact support.", "body": "I have already started the project both frontend and backend but when placing the image I get the following error \"Error generating code. Please contact support.\" Could you help me with this problem?\r\n![image](https://github.com/user-attachments/assets/a71c97fe-c3c2-419e-b036-0a74ee577279)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Other\n\u73af\u5883\u53d8\u91cf\n\u6587\u6863\u7684\u4e00\u4e2aloc\u7684\u8bef\u89e3"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "4e30b207c1ee9ddad05a37c31a11ac5a182490b7", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/270", "iss_label": "", "title": "Error configuring ANTHROPIC API KEY in.env file", "body": "I added \"ANTHROPIC_API_KEY=s****\" to the.env file\r\n\r\n\"No Anthropic API key found. 
Please add the environment variable ANTHROPIC_API_KEY to backend/.env\"\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '4e30b207c1ee9ddad05a37c31a11ac5a182490b7', 'files': [{'path': 'backend/config.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["backend/config.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "226af5bf4183539c97c7bab825cb9324b8c570c0", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/136", "iss_label": "", "title": "error generating code ", "body": "Error generating code. Check the Developer Console AND the backend logs for details. Feel free to open a Github issue.\r\n\r\nwhile hiiting the url and pasting the screenshot it shows below error ,am i doing it correctly \r\n<img width=\"940\" alt=\"Screenshot 2023-11-30 212304\" src=\"https://github.com/abi/screenshot-to-code/assets/152517537/38d9b1af-125b-45d4-9c4a-cbb600f5ec7d\">\r\n<img width=\"940\" alt=\"Screenshot 2023-11-30 212304\" src=\"https://github.com/abi/screenshot-to-code/assets/152517537/9c5bf85b-8109-44f7-842d-ec69dd2c49d0\">\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '226af5bf4183539c97c7bab825cb9324b8c570c0', 'files': [{'path': 'Troubleshooting.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": ["Troubleshooting.md"], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/452", "iss_label": "", "title": "build failed", "body": "**Describe the bug**\r\nDocker container Exited for `screenshot-to-code-main-frontend-1`\r\n\r\n**To Reproduce**\r\nOS: Ubuntu 22.04.4 LTS\r\nDocker Compose version v2.28.1\r\nBuild version: (commit id) b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1\r\n\r\n\r\n**Screenshots of backend AND frontend terminal logs**\r\nNginx conf\r\n```\r\n location /screenshot {\r\n proxy_set_header Host $host;\r\n proxy_set_header X-Real-IP $remote_addr;\r\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\r\n proxy_set_header X-Forwarded-Proto $scheme;\r\n proxy_send_timeout 1000;\r\n proxy_read_timeout 1000;\r\n send_timeout 1000;\r\n client_max_body_size 5M;\r\n proxy_pass http://127.0.0.1:5173;\r\n }\r\n```\r\n```\r\n~ docker logs --tail 444 screenshot-to-code-main-frontend-1\r\nyarn run v1.22.22\r\n$ vite --host 0.0.0.0\r\n\r\n VITE v4.5.0 ready in 1390 ms\r\n\r\n \u279c Local: http://localhost:5173/\r\n \u279c Network: http://172.20.0.3:5173/\r\n\r\n ERROR \r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n\r\n\r\n WARN Browserslist: caniuse-lite is outdated. 
Please run:\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\nfile:///app/tailwind.config.js:2\r\nmodule.exports = {\r\n^\r\n\r\nReferenceError: module is not defined\r\n at file:///app/tailwind.config.js:2:1\r\n at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)\r\n at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)\r\n at loadESMFromCJS (node:internal/modules/cjs/loader:1414:24)\r\n at Module._compile (node:internal/modules/cjs/loader:1547:5)\r\n at Object..js (node:internal/modules/cjs/loader:1677:16)\r\n at Module.load (node:internal/modules/cjs/loader:1318:32)\r\n at Function._load (node:internal/modules/cjs/loader:1128:12)\r\n at TracingChannel.traceSync (node:diagnostics_channel:322:14)\r\n at wrapModuleLoad (node:internal/modules/cjs/loader:219:24)\r\n at Module.require (node:internal/modules/cjs/loader:1340:12)\r\n at require (node:internal/modules/helpers:138:16)\r\n at /app/node_modules/tailwindcss/lib/lib/load-config.js:35:27\r\n at loadConfig (/app/node_modules/tailwindcss/lib/lib/load-config.js:39:6)\r\n at getTailwindConfig (/app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)\r\n at /app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92\r\n at /app/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11\r\n at plugins (/app/node_modules/tailwindcss/lib/plugin.js:38:69)\r\n at LazyResult.runOnRoot (/app/node_modules/postcss/lib/lazy-result.js:329:16)\r\n at LazyResult.runAsync (/app/node_modules/postcss/lib/lazy-result.js:258:26)\r\n at LazyResult.async (/app/node_modules/postcss/lib/lazy-result.js:160:30)\r\n at LazyResult.then (/app/node_modules/postcss/lib/lazy-result.js:404:17)\r\n\r\nNode.js v22.12.0\r\nerror Command failed with exit code 1.\r\ninfo Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\r\n```\r\n![image](https://github.com/user-attachments/assets/498ddae4-247e-4641-811b-28b197c7aeef)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/tailwind.config.js"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "214163b0e02176333b5543740cf6262e5da99602", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/268", "iss_label": "", "title": "model evaluation method", "body": "How to evaluate the performance of the model on generalized data, such as comparing the original screenshots with the generated results? 
Are there any indicators?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '214163b0e02176333b5543740cf6262e5da99602', 'files': [{'path': 'blog/evaluating-claude.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["blog/evaluating-claude.md"], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/443", "iss_label": "", "title": "ReferenceError: module is not defined", "body": "When running the frontend yarn dev command, I get the error below.\r\n\r\n\r\nSteps to reproduce the behavior:\r\n1. Go to frontend folder\r\n2. execute: `yarn`\r\n3. execute: `yarn dev`\r\n\r\n\r\nImmediately after executing the yarn dev command, I get a message that says:\r\n```\r\n ERROR 16:31:02\r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n```\r\n\r\nThen when I navigate to http://localhost:5173/, it crashes with the following output:\r\n\r\n```\r\n(base) user@192 frontend % yarn dev \r\nyarn run v1.22.22\r\nwarning ../../../package.json: No license field\r\n$ vite\r\n 16:31:00\r\n VITE v4.5.0 ready in 544 ms\r\n\r\n \u279c Local: http://localhost:5173/ 16:31:00\r\n \u279c Network: use --host to expose 16:31:00\r\n \u279c press h to show help 16:31:00\r\n\r\n ERROR 16:31:02\r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n\r\n\r\n WARN Browserslist: caniuse-lite is outdated. Please run: 16:31:37\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\n\r\n ERROR (node:91140) ExperimentalWarning: CommonJS module /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js is loading ES Module /Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js using require().\r\nSupport for loading ES Module in require() is an experimental feature and might change at any time\r\n(Use `node --trace-warnings ...` to show where the warning was created)\r\n\r\nfile:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2\r\nmodule.exports = {\r\n^\r\n\r\nReferenceError: module is not defined\r\n at file:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2:1\r\n at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)\r\n at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)\r\n at loadESMFromCJS (node:internal/modules/cjs/loader:1376:24)\r\n at Module._compile (node:internal/modules/cjs/loader:1528:5)\r\n at Object..js (node:internal/modules/cjs/loader:1698:10)\r\n at Module.load (node:internal/modules/cjs/loader:1303:32)\r\n at Function._load (node:internal/modules/cjs/loader:1117:12)\r\n at TracingChannel.traceSync (node:diagnostics_channel:322:14)\r\n at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)\r\n at Module.require (node:internal/modules/cjs/loader:1325:12)\r\n at require (node:internal/modules/helpers:136:16)\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:35:27\r\n at loadConfig (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:39:6)\r\n at getTailwindConfig 
(/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11\r\n at plugins (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/plugin.js:38:69)\r\n at LazyResult.runOnRoot (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:329:16)\r\n at LazyResult.runAsync (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:258:26)\r\n at LazyResult.async (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:160:30)\r\n at LazyResult.then (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:404:17)\r\n\r\nNode.js v23.3.0\r\nerror Command failed with exit code 1.\r\ninfo Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\r\n\r\n```\r\n\r\nEdit: I am running MacOS 15.1 M2 chip.\r\nEdit 2: I only set OpenAI key, I do not intend to use both APIs.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/tailwind.config.js"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/132", "iss_label": "", "title": "Why Connection closed 1006", "body": "![image](https://github.com/abi/screenshot-to-code/assets/19514719/e8d6aa4c-e133-475d-bce6-7309082c0cc2)\r\n\r\n![image](https://github.com/abi/screenshot-to-code/assets/19514719/9e00d1ef-67e2-4e13-9276-4ea4119c12cc)\r\n\r\n![image](https://github.com/abi/screenshot-to-code/assets/19514719/a15e37ce-d0aa-4dfe-896d-3eb0a96a7e63)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b', 'files': [{'path': 'backend/main.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["backend/main.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "689783eabd552151fa511e44cba90c14f3ee4dcd", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/83", "iss_label": "", "title": "code error", "body": "Hi, I tried the [online version](https://picoapps.xyz/free-tools/screenshot-to-code) of your tool with my API key but I got an error from that following screenshot \r\n\r\n![Web capture_22-11-2023_22822_www maras-it com](https://github.com/abi/screenshot-to-code/assets/482210/3c331d2e-cd22-4d65-8d4d-003468cd0c2e)\r\n\r\nwhich return this in the console :\r\n\r\n```JS\r\nWebSocket error code CloseEvent\u00a0{isTrusted: true, wasClean: false, code: 1006, reason: '', type: 'close',\u00a0\u2026}isTrusted: truebubbles: falsecancelBubble: 
falsecancelable: falsecode: 1006composed: falsecurrentTarget: WebSocket\u00a0{url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null,\u00a0\u2026}defaultPrevented: falseeventPhase: 0reason: \"\"returnValue: truesrcElement: WebSocket\u00a0{url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null,\u00a0\u2026}target: WebSocket\u00a0{url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null,\u00a0\u2026}timeStamp: 70399.80000001192type: \"close\"wasClean: false[[Prototype]]: CloseEventcode: (...)reason: (...)wasClean: (...)constructor: \u0192 CloseEvent()Symbol(Symbol.toStringTag): \"CloseEvent\"bubbles: (...)cancelBubble: (...)cancelable: (...)composed: (...)currentTarget: (...)defaultPrevented: (...)eventPhase: (...)returnValue: (...)srcElement: (...)target: (...)timeStamp: (...)type: (...)get code: \u0192 code()get reason: \u0192 reason()get wasClean: \u0192 wasClean()[[Prototype]]: Event\r\n(anonymous) @ index-9af3e78e.js:225\r\n```\r\n\r\n<img width=\"946\" alt=\"image\" src=\"https://github.com/abi/screenshot-to-code/assets/482210/b8403fbe-fc6b-479d-92ea-5f70610b3d6c\">\r\n\r\nany idea on that topic ?\r\n\r\ndavid\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '689783eabd552151fa511e44cba90c14f3ee4dcd', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "7d6fde2deafa014dc1a90c3b1dcb2ed88680a2ff", "is_iss": 1, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/1", "iss_label": "", "title": "Error: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte", "body": "Hello, thank you for your contribution, I am having the above problem, can you help me?\r\n\r\n` File \"<frozen codecs>\", line 322, in decode\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Other\n\u73af\u5883\u53d8\u91cf"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} -{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c", "is_iss": 0, "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/150", "iss_label": "", "title": "Error generating code. Check the Developer Console AND the backend logs for details", "body": "My ChatGPT has access to GPT-VISION. and the web app loads well but when I upload an image. it returns this error 'Error generating code. 
Check the Developer Console AND the backend logs for details'\r\n<img width=\"466\" alt=\"error\" src=\"https://github.com/abi/screenshot-to-code/assets/100529823/97c337b7-de54-45f9-8def-f984ade50a6d\">\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c', 'files': [{'path': 'docker-compose.yml', 'Loc': {'(None, None, 20)': {'mod': [20]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "4622b3395276b37e10141fab43ffea33941ca0c2", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/2384", "iss_label": "", "title": "How the grad is transferred between layer", "body": "consider a simple example here:\r\n```python\r\nimport torch\r\nfrom torch.autograd import Variable\r\n\r\ninput = Variable(torch.randn(20, 3, 28, 28), requires_grad=True)\r\nm = torch.nn.Conv2d(3, 16, 5)\r\noutput = m(input)\r\n\r\nloss = torch.sum(output)# define loss to perform backprop\r\nm.zero_grad()\r\nloss.backward()\r\n\r\nprint(type(input))\r\nprint(input.grad.size())\r\nprint(type(output))\r\nprint(output.grad)\r\n```\r\nthe output is:\r\n```\r\n<class 'torch.autograd.variable.Variable'>\r\ntorch.Size([20, 3, 28, 28])\r\n<class 'torch.autograd.variable.Variable'>\r\nNone\r\n```\r\nI find the `output.grad` is `None`. I don't know how the `input.grad` is calculated without `output.grad`.\r\nand want to know how to get the values of `output.grad`.\r\n\r\nthanks!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '4622b3395276b37e10141fab43ffea33941ca0c2', 'files': [{'path': 'torch/autograd/variable.py', 'Loc': {\"('Variable', 'retain_grad', 236)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/autograd/variable.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "2abcafcfd8beb4f6a22e08532d58f9f09c490f0f", "is_iss": 0, "iss_html_url": "https://github.com/pytorch/pytorch/issues/96983", "iss_label": "module: binaries\ntriaged\nmodule: arm", "title": "PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nPyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support, where as PyTorch 1.13.0 had support.\r\n\r\nSolution:\r\nthe wheels need to be built with the `--enable-mkldnn` option while building them from the pytorch/builder repo.\r\n\r\nexample command for pytorch wheel builder script:\r\n`./build_aarch64_wheel.py --python-version 3.8 --use-docker --keep-running --os ubuntu20_04 --enable-mkldnn --branch release/2.0`\r\n\r\nTo reproduce the issue, create c6g or c7g instance from AWS EC2, and in the below output, look for `USE_MKLDNN=`, this was ON for PyTorch 1.13.0 but OFF for PyTorch2.0.0.\r\n\r\nnon-working scenario\r\n```\r\npip install torch==2.0.0\r\n\r\ntime python3 -c \"import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); 
b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() for i in range(1000)]\"\r\n2.0.0 PyTorch built with:\r\n - GCC 10.2\r\n - C++ Version: 201703\r\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: NO AVX\r\n - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n\r\n```\r\n\r\nworking scenario:\r\n\r\n```\r\npip3 install torch==1.13.0\r\n\r\ntime python3 -c \"import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() for i in range(1000)]\"\r\n\r\n1.13.0 PyTorch built with:\r\n - GCC 10.2\r\n - C++ Version: 201402\r\n - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: NO AVX\r\n - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_VERSION=1.13.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n \r\n\r\n\r\n```\r\n\r\n### Versions\r\n```\r\nCollecting environment information...\r\nPyTorch version: 2.0.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.5 LTS (aarch64)\r\nGCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0\r\nClang version: Could not collect\r\nCMake version: version 3.25.2\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-1028-aws-aarch64-with-glibc2.29\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nThread(s) per core: 1\r\nCore(s) per socket: 16\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: ARM\r\nModel: 1\r\nStepping: r1p1\r\nBogoMIPS: 2100.00\r\nL1d cache: 1 MiB\r\nL1i cache: 1 MiB\r\nL2 cache: 16 MiB\r\nL3 cache: 32 MiB\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; CSV2, BHB\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.2\r\n[pip3] torch==2.0.0\r\n[pip3] torchvision==0.14.1\r\n[conda] Could not collect\r\n```\r\n\r\ncc @ezyang @seemethere 
@malfet", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2abcafcfd8beb4f6a22e08532d58f9f09c490f0f', 'files': [{'path': '.ci/aarch64_linux/build_aarch64_wheel.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [".ci/aarch64_linux/build_aarch64_wheel.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "2dff0b3e918530719f7667cb31541f036a25e3f2", "is_iss": 1, "iss_html_url": "https://github.com/pytorch/pytorch/issues/48435", "iss_label": "", "title": "AttributeError: module 'torch.cuda' has no attribute 'comm'", "body": "## \u2753 Questions and Help\r\n\r\nI'm using torch 1.7.0, and get this kind of error\r\n\r\nmy torch is installed via \r\n\r\npip install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html\r\n\r\nmy os is win10", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/facebookresearch/InterHand2.6M/commit/874eb9f740ef54c275433d1bd27f8fb8f6a8f17d", "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "facebookresearch", "pro": "InterHand2.6M", "path": ["{'base_commit': '874eb9f740ef54c275433d1bd27f8fb8f6a8f17d', 'files': [{'path': 'common/nets/module.py', 'status': 'modified', 'Loc': {\"('PoseNet', 'soft_argmax_1d', 41)\": {'mod': [43]}}}]}"]}], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["common/nets/module.py"], "doc": [], "test": [], "config": [], "asset": ["InterHand2.6M"]}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "e8f6013d0349229fd8f7d298952cfe56fc4b8761", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2070", "iss_label": "bug\nstale", "title": "Liaobots and You don't work", "body": "Liaobots and You do not work, they give the following errors:\r\n\r\n```\r\nLiaobots: ResponseStatusError: Response 500: Error\r\n``` \r\n\r\n```\r\nYou: ResponseStatusError: Response 401: {\"status_code\":401,\"request_id\":\"request-id-live-183191e7-adc1-4838-8e29-6e0c5c3ca048\",\"error_type\":\"endpoint_not_authorized_for_sdk\",\"error_message\":\"The project owner has not authorized the SDK to call this endpoint. 
Please enable it in the dashboard to continue: https://stytch.com/dashboard/sdk-configuration.\",\"error_url\":\"https://stytch.com/docs/api/errors/401#endpoint_not_authorized_for_sdk\"}\r\n``` \r\n@xtekky @hlohaus ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e8f6013d0349229fd8f7d298952cfe56fc4b8761', 'files': [{'path': 'g4f/Provider/Liaobots.py', 'Loc': {\"('Liaobots', 'create_async_generator', 111)\": {'mod': [149]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Liaobots.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "fa2d608822540c9b73350bfa036e8822ade4e23f", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2305", "iss_label": "stale", "title": "ValueError: Unknown model: dall-e-3", "body": "```\r\nC:\\Users\\MAX\\Desktop>pip install -U g4f[all]\r\nDefaulting to user installation because normal site-packages is not writeable\r\nRequirement already satisfied: g4f[all] in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (0.3.3.2)\r\nRequirement already satisfied: requests in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.32.3)\r\nRequirement already satisfied: aiohttp in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.9.3)\r\nRequirement already satisfied: brotli in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.1.0)\r\nRequirement already satisfied: pycryptodome in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.20.0)\r\nRequirement already satisfied: curl-cffi>=0.6.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.7.3)\r\nRequirement already satisfied: cloudscraper in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.2.71)\r\nRequirement already satisfied: certifi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2024.8.30)\r\nRequirement already satisfied: browser-cookie3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.19.1)\r\nRequirement already satisfied: PyExecJS in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.5.1)\r\nRequirement already satisfied: duckduckgo-search>=5.0 in 
c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (6.3.2)\r\nRequirement already satisfied: beautifulsoup4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (4.12.3)\r\nRequirement already satisfied: pywebview in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (5.2)\r\nRequirement already satisfied: platformdirs in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (4.2.2)\r\nRequirement already satisfied: plyer in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.1.0)\r\nRequirement already satisfied: cryptography in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (43.0.0)\r\nRequirement already satisfied: aiohttp-socks in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.8.4)\r\nRequirement already satisfied: pillow in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (10.2.0)\r\nRequirement already satisfied: cairosvg in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.7.1)\r\nRequirement already satisfied: werkzeug in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.0.1)\r\nRequirement already satisfied: flask in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.0.2)\r\nRequirement already satisfied: loguru in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.7.2)\r\nRequirement already satisfied: fastapi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.109.2)\r\nRequirement already satisfied: uvicorn in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.27.0.post1)\r\nRequirement already satisfied: nest-asyncio in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.6.0)\r\nRequirement already satisfied: cffi>=1.12.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from curl-cffi>=0.6.2->g4f[all]) (1.17.0)\r\nRequirement already satisfied: typing-extensions in 
c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from curl-cffi>=0.6.2->g4f[all]) (4.12.2)\r\nRequirement already satisfied: click>=8.1.7 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from duckduckgo-search>=5.0->g4f[all]) (8.1.7)\r\nRequirement already satisfied: primp>=0.6.4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from duckduckgo-search>=5.0->g4f[all]) (0.6.4)\r\nRequirement already satisfied: aiosignal>=1.1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (1.3.1)\r\nRequirement already satisfied: attrs>=17.3.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (23.2.0)\r\nRequirement already satisfied: frozenlist>=1.1.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (1.4.1)\r\nRequirement already satisfied: multidict<7.0,>=4.5 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (6.0.5)\r\nRequirement already satisfied: yarl<2.0,>=1.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from aiohttp->g4f[all]) (1.9.4)\r\nRequirement already satisfied: python-socks<3.0.0,>=2.4.3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->g4f[all]) (2.4.4)\r\nRequirement already satisfied: soupsieve>1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from beautifulsoup4->g4f[all]) (2.5)\r\nRequirement already satisfied: lz4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from browser-cookie3->g4f[all]) (4.3.3)\r\nRequirement already satisfied: pycryptodomex in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from browser-cookie3->g4f[all]) (3.20.0)\r\nRequirement already satisfied: cairocffi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (1.6.1)\r\nRequirement already satisfied: cssselect2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (0.7.0)\r\nRequirement already satisfied: defusedxml in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (0.7.1)\r\nRequirement already 
satisfied: tinycss2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cairosvg->g4f[all]) (1.2.1)\r\nRequirement already satisfied: pyparsing>=2.4.7 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cloudscraper->g4f[all]) (3.1.2)\r\nRequirement already satisfied: requests-toolbelt>=0.9.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cloudscraper->g4f[all]) (1.0.0)\r\nRequirement already satisfied: charset-normalizer<4,>=2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from requests->g4f[all]) (3.3.2)\r\nRequirement already satisfied: idna<4,>=2.5 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from requests->g4f[all]) (3.6)\r\nRequirement already satisfied: urllib3<3,>=1.21.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from requests->g4f[all]) (2.1.0)\r\nRequirement already satisfied: pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from fastapi->g4f[all]) (2.6.1)\r\nRequirement already satisfied: starlette<0.37.0,>=0.36.3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from fastapi->g4f[all]) (0.36.3)\r\nRequirement already satisfied: Jinja2>=3.1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from flask->g4f[all]) (3.1.3)\r\nRequirement already satisfied: itsdangerous>=2.1.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from flask->g4f[all]) (2.1.2)\r\nRequirement already satisfied: blinker>=1.6.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from flask->g4f[all]) (1.7.0)\r\nRequirement already satisfied: MarkupSafe>=2.1.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from werkzeug->g4f[all]) (2.1.5)\r\nRequirement already satisfied: colorama>=0.3.4 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from loguru->g4f[all]) (0.4.6)\r\nRequirement already satisfied: win32-setctime>=1.0.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from loguru->g4f[all]) (1.1.0)\r\nRequirement already satisfied: six>=1.10.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from PyExecJS->g4f[all]) 
(1.16.0)\r\nRequirement already satisfied: proxy-tools in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pywebview->g4f[all]) (0.1.0)\r\nRequirement already satisfied: bottle in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pywebview->g4f[all]) (0.13.1)\r\nRequirement already satisfied: pythonnet in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pywebview->g4f[all]) (3.0.3)\r\nRequirement already satisfied: h11>=0.8 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from uvicorn->g4f[all]) (0.14.0)\r\nRequirement already satisfied: pycparser in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cffi>=1.12.0->curl-cffi>=0.6.2->g4f[all]) (2.22)\r\nRequirement already satisfied: annotated-types>=0.4.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->g4f[all]) (0.6.0)\r\nRequirement already satisfied: pydantic-core==2.16.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pydantic!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0,>=1.7.4->fastapi->g4f[all]) (2.16.2)\r\nRequirement already satisfied: async-timeout>=3.0.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from python-socks[asyncio]<3.0.0,>=2.4.3->aiohttp-socks->g4f[all]) (4.0.3)\r\nRequirement already satisfied: anyio<5,>=3.4.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from starlette<0.37.0,>=0.36.3->fastapi->g4f[all]) (4.2.0)\r\nRequirement already satisfied: webencodings in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from cssselect2->cairosvg->g4f[all]) (0.5.1)\r\nRequirement already satisfied: clr-loader<0.3.0,>=0.2.6 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from pythonnet->pywebview->g4f[all]) (0.2.6)\r\nRequirement already satisfied: sniffio>=1.1 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from anyio<5,>=3.4.0->starlette<0.37.0,>=0.36.3->fastapi->g4f[all]) (1.3.0)\r\n\r\nC:\\Users\\MAX\\Desktop>\r\nTraceback (most recent call last):.py\r\n File \"C:\\Users\\MAX\\Desktop\\gptimg.py\", line 4, in <module>\r\n response = client.images.generate(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\MAX\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\g4f\\client\\client.py\", line 421, in generate\r\n return 
asyncio.run(self.async_generate(prompt, model, response_format=response_format, **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\runners.py\", line 194, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\base_events.py\", line 687, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\MAX\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\g4f\\client\\client.py\", line 426, in async_generate\r\n raise ValueError(f\"Unknown model: {model}\")\r\nValueError: Unknown model: dall-e-3\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fa2d608822540c9b73350bfa036e8822ade4e23f', 'files': [{'path': 'g4f/models.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/models.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "1ade1d959cbc9aea7cf653bbe5b6c414ba486c97", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1292", "iss_label": "bug\nstale", "title": "RecursionError: maximum recursion depth exceeded while calling a Python object", "body": "Ubuntu 22, g4f-0.1.9.0, pip installation method, python3.10\r\n\r\n**Bug description**\r\nG4F API has these errors after 5-10 requests. I have to restart constantly. It is very uncomfortable. 
This problem did not exist in the previous version.\r\n\r\n**Errors**\r\n```\r\nRecursionError: maximum recursion depth exceeded in comparison\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\nRuntimeError: RetryProvider failed:\r\nYou: RecursionError: maximum recursion depth exceeded\r\nChatgpt4Online: RecursionError: maximum recursion depth exceeded in comparison\r\nChatAnywhere: RecursionError: maximum recursion depth exceeded while encoding a JSON object\r\nChatgptX: RecursionError: maximum recursion depth exceeded in comparison\r\nGptForLove: RuntimeUnavailableError: Could not find an available JavaScript runtime.\r\nChatBase: RecursionError: maximum recursion depth exceeded while encoding a JSON object\r\nGptGo: RecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\n**Traceback**\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py\", line 85, in chat_completions\r\n response = g4f.ChatCompletion.create(\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/__init__.py\", line 76, in create\r\n return result if stream else ''.join(result)\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py\", line 59, in create_completion\r\n self.raise_exceptions()\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py\", line 87, in raise_exceptions\r\n raise RuntimeError(\"\\n\".join([\"RetryProvider failed:\"] + [\r\nRuntimeError: RetryProvider failed:\r\nChatAnywhere: RecursionError: maximum recursion depth exceeded\r\nChatBase: RecursionError: maximum recursion depth exceeded\r\nChatgptX: RecursionError: maximum recursion depth exceeded\r\nYou: RecursionError: maximum recursion depth exceeded while calling a Python object\r\nGptGo: RecursionError: maximum recursion depth exceeded\r\nChatgpt4Online: RecursionError: maximum recursion depth exceeded\r\nGptForLove: RecursionError: maximum recursion depth exceeded\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py\", line 408, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/applications.py\", line 1106, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/applications.py\", line 122, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 184, in __call__\r\n raise exc\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py\", line 20, in __call__\r\n raise e\r\n File 
\"/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py\", line 17, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 718, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 274, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py\", line 91, in chat_completions\r\n logging.exception(e)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 2113, in exception\r\n error(msg, *args, exc_info=exc_info, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 2105, in error\r\n root.error(msg, *args, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1506, in error\r\n self._log(ERROR, msg, args, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1624, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1634, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1696, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 968, in handle\r\n self.emit(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1100, in emit\r\n msg = self.format(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 943, in format\r\n return fmt.format(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 686, in format\r\n record.exc_text = self.formatException(record.exc_info)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 636, in formatException\r\n traceback.print_exception(ei[0], ei[1], tb, None, sio)\r\n File \"/usr/lib/python3.10/traceback.py\", line 120, in print_exception\r\n for line in te.format(chain=chain):\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 248, in format\r\n yield from _ctx.emit(exc.format_exception_only())\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 64, in emit\r\n for text in text_gen:\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 335, in format_exception_only\r\n if isinstance(self.__notes__, collections.abc.Sequence):\r\n File \"/usr/lib/python3.10/abc.py\", line 119, in __instancecheck__\r\n return _abc_instancecheck(cls, instance)\r\nRecursionError: maximum recursion depth exceeded in comparison\r\n```\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1ade1d959cbc9aea7cf653bbe5b6c414ba486c97', 'files': [{'path': 'g4f/cli.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/cli.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c159eebd494b1aef06340429b7b62cdfb84f783d", "is_iss": 
0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2556", "iss_label": "bug", "title": "Errors when generating images in the following models:", "body": "Hi!\r\nerrors when generating images in the following models:\r\nResponse 404: The page could not be found\r\nsdxl, playground-v2.5, sd-3\r\n\r\n dall-e-3: Missing \"_U\" cookie\r\n \r\n midjourney: Cannot connect to host image.pollinations.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c159eebd494b1aef06340429b7b62cdfb84f783d', 'files': [{'path': 'projects/windows/main.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["projects/windows/main.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b7eee50930dbd782d7c068d1d29cd270b97bc741", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1710", "iss_label": "bug\nstale", "title": "AttributeError: module 'g4f' has no attribute 'client'", "body": "**Bug description** \r\nWhen trying to run script from Quickstart, i get this error.\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py\", line 3, in <module>\r\n engine = g4f.client.Client()\r\nAttributeError: module 'g4f' has no attribute 'client'\r\n\r\n**Environment**\r\nPython version: 3.11.7", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b7eee50930dbd782d7c068d1d29cd270b97bc741', 'files': [{'path': 'g4f/client/__init__.py', 'Loc': {}}, {'path': 'C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py'}]}", "own_code_loc": [{"path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["g4f/client/__init__.py"], "doc": [], "test": ["C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "2a54c36043b9d87b96c4b7699ce194f8523479b8", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/552", "iss_label": "bug", "title": "Unable to fetch the response, Please try again.", "body": "![IMG_20230514_171809.jpg](https://github.com/xtekky/gpt4free/assets/29172927/6263b9db-3362-4c5b-b043-80b62213a61b)\n\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2a54c36043b9d87b96c4b7699ce194f8523479b8', 'files': [{'path': 'gpt4free/you/__init__.py', 'Loc': {\"('Completion', 'create', 22)\": {'mod': [41]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt4free/you/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c29487cdb522a2655ccff45bdfc33895ed4daf84", "is_iss": 0, "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2078", "iss_label": "bug", "title": "HuggingChat 
provider is not working - ResponseStatusError: Response 500", "body": "### Bug description\r\n\r\nWhen I try to use the HuggingChat provider, having added a cookies/har file, I always get the same error: `An error occurred: HuggingChat: ResponseStatusError: Response 500:`\r\n\r\n```\r\nUsing HuggingChat provider and CohereForAI/c4ai-command-r-plus model\r\nINFO:werkzeug:192.168.80.1 - - [22/Jun/2024 16:31:48] \"POST /backend-api/v2/conversation HTTP/1.1\" 200 -\r\nERROR:root:Response 500: \r\nTraceback (most recent call last):\r\n File \"/app/g4f/gui/server/api.py\", line 177, in _create_response_stream\r\n for chunk in ChatCompletion.create(**kwargs):\r\n File \"/app/g4f/providers/base_provider.py\", line 223, in create_completion\r\n yield loop.run_until_complete(await_callback(gen.__anext__))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/asyncio/base_events.py\", line 654, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"/app/g4f/providers/base_provider.py\", line 52, in await_callback\r\n return await callback()\r\n ^^^^^^^^^^^^^^^^\r\n File \"/app/g4f/Provider/HuggingChat.py\", line 99, in create_async_generator\r\n await raise_for_status(response)\r\n File \"/app/g4f/requests/raise_for_status.py\", line 28, in raise_for_status_async\r\n raise ResponseStatusError(f\"Response {response.status}: {message}\")\r\ng4f.errors.ResponseStatusError: Response 500:\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Put your cookies json file / har file for `huggingface.co` in the `har_and_cookies` directory\r\n2. Run gpt4free in Docker using docker compose\r\n3. Open g4f web ui (using OpenAI compatible API (port `1337`) gives the same error, though)\r\n4. Select this provider: `HuggingChat (Auth)`\r\n5. Select any model, for example `CohereForAI/c4ai-command-r-plus`\r\n6. Send any message to the LLM\r\n7. 
See the error\r\n\r\n### Screenshot\r\n\r\n![image](https://github.com/xtekky/gpt4free/assets/35491968/7afaf19b-4af2-4703-8bf3-c4c02eb511fc)\r\n\r\n### Environment\r\n\r\n- gpt4free version 0.3.2.0 (this git repository, commit `e8f6013d`)\r\n- docker compose\r\n- Ubuntu 22.04.4 LTS x86_64\r\n\r\n-----\r\n\r\nduplicates https://github.com/xtekky/gpt4free/issues/2053 which is closed", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c29487cdb522a2655ccff45bdfc33895ed4daf84', 'files': [{'path': 'g4f/Provider/HuggingChat.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/HuggingChat.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e", "is_iss": 0, "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/68", "iss_label": "question", "title": "default username and password of social fish", "body": "Hey man, the tool works fine, but what is the default username and password of SocialFish?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "f7026b04f5e5909aa15848b25de2becd675871a9", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2475", "iss_label": "", "title": "Multinomial Naive Bayes: Scikit and Weka have different results", "body": "Hi All,\nI used sklearn.naive_bayes.MultinomialNB on a toy example.\nComparing the results with WEKA, I've noticed quite a different AUC.\nScikit (0.579) - Weka (0.664)\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f7026b04f5e5909aa15848b25de2becd675871a9', 'files': [{'path': 'sklearn/cross_validation.py', 'Loc': {\"(None, 'cross_val_score', 1075)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "0ab5c678bba02888b62b777b4c757e367b3458d5", "is_iss": 0, "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/8470", "iss_label": "", "title": "How to let gbdt = GradientBoostingRegressor(), gbdt.fit(X_feature, X_label) know whether the feature of input X is categorical or numerical?", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0ab5c678bba02888b62b777b4c757e367b3458d5', 'files': [{'path': 'sklearn/preprocessing/_encoders.py', 'Loc': {\"('OneHotEncoder', None, 151)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": 
{"code": ["sklearn/preprocessing/_encoders.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "184f2dba255f279697cb1d7567428b3e6403c2d0", "is_iss": 1, "iss_html_url": "https://github.com/pandas-dev/pandas/issues/3209", "iss_label": "", "title": "BUG: read_csv: dtype={'id' : np.str}: Datatype not understood", "body": "I have a CSV with several columns. The first of which is a field called `id` with entries of the type `0001`, `0002`, etc. \n\nWhen loading this file, the following works:\n\n``` python\npd.read_csv(my_path, dtype={'id' : np.int})\n```\n\nbut the following doesn't:\n\n``` python\npd.read_csv(my_path, dtype={'id' : np.str})\n```\n\nnor does this either:\n\n``` python\npd.read_csv(my_path, dtype={'id' : str})\n```\n\nI get: `Datatype not understood`\n\nThis is with `pandas-0.10.1`\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [12, 18], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nand\n2", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "53011c3d7946dadb8274a4c5c7586ab54edf792d", "is_iss": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/48", "iss_label": "", "title": "How to run 13B model on 4*16G V100\uff1f", "body": "RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 15.78 GiB total capacity; 14.26 GiB already allocated; 121.19 MiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 143) of binary: /opt/conda/envs/torch1.12/bin/python", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "fabawi", "pro": "wrapyfi"}, {"org": "modular-ml", "pro": "wrapyfi-examples_llama"}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["wrapyfi", "wrapyfi-examples_llama"]}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "7e1b864d574fe6f5ff75fa1d028feb269f7152d2", "is_iss": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/836", "iss_label": "model-usage", "title": "Failed to run llama2-13B but it worked with llama2-7B", "body": "It worked with llama2-7b. 
But when I tried to run the **llama2-13b** model using this `torchrun --nproc_per_node 2 example_chat_completion.py --ckpt_dir /path/to/llama-2-13b-chat/ --tokenizer_path /path/to/tokenizer.model --max_seq_len 128 --max_batch_size 4`, it didn't work.\r\n\r\nError log in brief: `RuntimeError: CUDA error: invalid device ordinal`\r\n\r\n#### Full error log\r\n```log\r\nWARNING:torch.distributed.run:\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.\r\n*****************************************\r\n> initializing model parallel with size 2\r\n> initializing ddp with size 1\r\n> initializing pipeline with size 1\r\nTraceback (most recent call last):\r\n File \"/home/alex/joy/ml/llama_playground/llama/example_chat_completion.py\", line 104, in <module>\r\n fire.Fire(main)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 141, in Fire\r\n component_trace = _Fire(component, args, parsed_flag_args, context, name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 475, in _Fire\r\n component, remaining_args = _CallAndUpdateTrace(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 691, in _CallAndUpdateTrace\r\n component = fn(*varargs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/joy/ml/llama_playground/llama/example_chat_completion.py\", line 35, in main\r\n generator = Llama.build(\r\n ^^^^^^^^^^^^\r\n File \"/home/alex/joy/ml/llama_playground/llama/llama/generation.py\", line 92, in build\r\n torch.cuda.set_device(local_rank)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/cuda/__init__.py\", line 350, in set_device\r\n torch._C._cuda_setDevice(device)\r\nRuntimeError: CUDA error: invalid device ordinal\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n\r\nWARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41031 closing signal SIGTERM\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 41032) of binary: /home/alex/miniconda3/envs/llama/bin/python\r\nTraceback (most recent call last):\r\n File \"/home/alex/miniconda3/envs/llama/bin/torchrun\", line 33, in <module>\r\n sys.exit(load_entry_point('torch==2.0.1', 'console_scripts', 'torchrun')())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/run.py\", line 794, in main\r\n run(args)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/run.py\", line 785, in run\r\n elastic_launch(\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, 
list(args))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 250, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError:\r\n============================================================\r\nexample_chat_completion.py FAILED\r\n------------------------------------------------------------\r\nFailures:\r\n <NO_OTHER_FAILURES>\r\n------------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-10-02_12:32:27\r\n host : alex-workstation\r\n rank : 1 (local_rank: 1)\r\n exitcode : 1 (pid: 41032)\r\n error_file: <N/A>\r\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\n============================================================\r\n(\r\n```\r\n\r\n\r\n#### System Specs\r\ni9 9900K + 16G DDR4 (with 16GB swap) + 2080ti (modded version with 22GB VRAM, the card runs smoothly on Windows and Linux)\r\nOS:\r\nUbuntu 22.04 x86_64\r\nEnvironments:\r\nFrom miniconda\r\n```conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia```\r\n\r\n#### My attempt NO.1 \r\nI set the only GPU RTX 2080ti in the terminal. `export CUDA_VISIBLE_DEVICES=0` **0** is the ID of my RTX 2080ti.\r\nI looked up the GPU id by ```sudo lshw -C display```\r\n\r\nResult.\r\n```log\r\n *-display \r\n description: VGA compatible controller\r\n product: TU102 [GeForce RTX 2080 Ti Rev. A]\r\n vendor: NVIDIA Corporation\r\n physical id: 0\r\n bus info: pci@0000:01:00.0\r\n version: a1\r\n width: 64 bits\r\n clock: 33MHz\r\n capabilities: pm msi pciexpress vga_controller bus_master cap_list rom\r\n configuration: driver=nvidia latency=0\r\n resources: iomemory:2f0-2ef iomemory:2f0-2ef irq:186 memory:de000000-deffffff memory:2fe0000000-2fefffffff memory:2ff0000000-2ff1ffffff ioport:e000(size=128) memory:c0000-dffff\r\n *-display\r\n description: Display controller\r\n product: CoffeeLake-S GT2 [UHD Graphics 630]\r\n vendor: Intel Corporation\r\n physical id: 2\r\n bus info: pci@0000:00:02.0\r\n version: 02\r\n width: 64 bits\r\n clock: 33MHz\r\n capabilities: pciexpress msi pm bus_master cap_list\r\n configuration: driver=i915 latency=0\r\n resources: iomemory:2f0-2ef iomemory:2f0-2ef irq:185 memory:2ffe000000-2ffeffffff memory:2fd0000000-2fdfffffff ioport:f000(size=64)\r\n *-graphics\r\n product: EFI VGA\r\n physical id: 2\r\n logical name: /dev/fb0\r\n capabilities: fb\r\n configuration: depth=32 resolution=2560,1080\r\n```\r\nBut it's still the same error. FYI, when starting to run llama2-13B, the ram usage hadn't even reached 16GB yet.\r\n\r\nWith some testing codes using pytorch\r\n```python\r\nimport torch\r\ndevice_count = torch.cuda.device_count()\r\nprint(f\"Number of available devices: {device_count}\")\r\n\r\nfor i in range(device_count):\r\n print(f\"Device {i}: {torch.cuda.get_device_name(i)}\")\r\n```\r\noutput: \r\n**Number of available devices: 1\r\nDevice 0: NVIDIA GeForce RTX 2080 Ti**\r\n\r\nNvidia SMI info\r\n```log\r\n+---------------------------------------------------------------------------------------+\r\n| NVIDIA-SMI 535.113.01 Driver Version: 535.113.01 CUDA Version: 12.2 |\r\n|-----------------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. 
|\r\n| | | MIG M. |\r\n|=========================================+======================+======================|\r\n| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:01:00.0 On | N/A |\r\n| 41% 34C P8 30W / 260W | 288MiB / 22528MiB | 12% Default |\r\n| | | N/A |\r\n+-----------------------------------------+----------------------+----------------------+\r\n \r\n+---------------------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=======================================================================================|\r\n| 0 N/A N/A 2216 G /usr/lib/xorg/Xorg 165MiB |\r\n| 0 N/A N/A 2338 G /usr/bin/gnome-shell 34MiB |\r\n| 0 N/A N/A 34805 G ...26077060,3793940789578302769,262144 82MiB |\r\n| 0 N/A N/A 44004 G ...sktop/5088/usr/bin/telegram-desktop 3MiB |\r\n+---------------------------------------------------------------------------------------+\r\n```\r\n\r\n#### My attempt NO.2\r\n\r\nChanged to Pytorch nightly and cuda 12.1 support. `conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia` My Linux is using Nvidia driver version 535.113.01 with cuda 12.2 support.\r\n\r\nPytorch version: 2.2.0.dev20231001\r\nSame error.\r\n\r\n#### My attempt NO.3\r\nDowngrade the Linux driver? (Not tested yet)\r\n\r\n#### My attempt NO.4\r\nUse the Docker version Pytorch and CUDA inside a docker instance. https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch \r\n\r\nAfter downloading the docker image, i started a docker instance by doing so `docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.09-py3`\r\n\r\nError\r\n`docker: Error response from daemon: could not select device driver \"\" with capabilities: [[gpu]]`\r\n\r\n\r\n\r\nHow to run llama2-13B-chat or 70B with a RTX graphics card of 22GB RAM? Thanks in advance!\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "meta-llama", "pro": "llama-cookbook", "path": ["examples/README.md", "examples/inference.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/inference.py"], "doc": ["examples/README.md"], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "is_iss": 0, "iss_html_url": "https://github.com/meta-llama/llama/issues/201", "iss_label": "", "title": "Torchrun distributed running does not work", "body": "Running in a distributed manner either returns an error, or with the simplest example, produce obviously incorrect output.\r\n\r\nThe following is the result of running 13B model across two nodes. 
Node A:\r\n\r\n`python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=0 --master_addr=\"gpu3.lan\" --master_port=1234 example.py --ckpt_dir $MODELS/65B --tokenizer_path $MODELS/tokenizer.model`\r\n\r\nNode B:\r\n\r\n`python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=1 --master_addr=\"gpu3.lan\" --master_port=1234 example.py --ckpt_dir $MODELS/65B --tokenizer_path $MODELS/tokenizer.model`\r\n\r\nIt does complete without error, but the results are messed up:\r\n\r\n![image](https://user-images.githubusercontent.com/252193/225178366-2c929cd0-3e87-42d4-8bb5-5cc737189959.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '57b0eb62de0636e75af471e49e2f1862d908d9d8', 'files': [{'path': 'example.py', 'Loc': {\"(None, 'setup_model_parallel', 19)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["example.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7", "is_iss": 0, "iss_html_url": "https://github.com/meta-llama/llama/issues/670", "iss_label": "", "title": "Counting tokens for Chat models", "body": "Does anyone how to calculate prompt and completion tokens for Llama Chat models for monitoring purposes?\r\nCan we add this in responses as many times we don't have libraries to achieve this in languages like java, kotlin, etc.\r\n\r\nSimilar to tiktoken by openai - https://github.com/openai/tiktoken", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7', 'files': [{'path': 'llama/tokenizer.py', 'Loc': {\"('Tokenizer', 'encode', 31)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["llama/tokenizer.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "8cd608cc019b306ab6d8b7abd61014b436968086", "is_iss": 0, "iss_html_url": "https://github.com/meta-llama/llama/issues/732", "iss_label": "download-install", "title": "download.sh problem, llama model 70B, results in several 0kb .pth files after download; two separate network locations for testing; reported by several users on different networks; MacOS Apple Silicon M2 Ventura 13.4.1 (c) (22F770820d)", "body": "After verifying that all libraries from the requirements.txt were installed in my python3 environment, in bash terminal I run llama-main/download.sh -- however, upon completion downloading (and overall execution) I am finding that one or more consolidated.0x.pth files are zero kilobytes containing no data. \r\n\r\nI have tried to successfully download all .pth files on both WiFi & Ethernet from two separate networks. One at home, on my Verizon 5g ISP and the other on-campus at MIT. The same result occurs. I have verified disk storage space on both machines I attempted to acquire these files. It seems \" consolidated.05.pth \" fails most often; with the successfully acquired .pth files being 17.25 GB in size. 
However, this morning I am seeing that consolidated.**05**.pth, consolidated.**04**.pth, and consolidated.**00**.pth have failed.\r\n\r\nI am discouraged, as I have attempted to acquire these several times and requested a Meta access key twice. \r\n\r\nAre there any recommendations you can provide me with? Other resources, endpoints, or potential port forwards/triggers that might resolve the problem in some way? \r\n\r\nOr is this a **bug**?\r\n\r\nThank you for your time!!\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8cd608cc019b306ab6d8b7abd61014b436968086', 'files': [{'path': 'download.sh', 'Loc': {'(None, None, 23)': {'mod': [23]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["download.sh"]}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "99e19d4f83b7fe77e8b3b692e01019640d7b457a", "is_iss": 0, "iss_html_url": "https://github.com/meta-llama/llama/issues/493", "iss_label": "download-install", "title": "download.sh: line 2: $'\\r': command not found", "body": "I ran download.sh with Cygwin on Windows, but it gives back \"download.sh: line 2: $'\\r': command not found\"\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '99e19d4f83b7fe77e8b3b692e01019640d7b457a', 'files': [{'path': 'download.sh', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["download.sh"]}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "1076b9c51c77ad06e9d7ba8a4c6df775741732bd", "is_iss": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/21", "iss_label": "", "title": "Add to huggingface", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "docs", "pro": "transformers", "path": ["model_doc/llama"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": ["model_doc/llama"], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "is_iss": 0, "iss_html_url": "https://github.com/meta-llama/llama/issues/751", "iss_label": "documentation", "title": "Run llama2 on specified GPU", "body": "Suppose I have 8 A6000 GPUs and I would like to run separate experiments on separate GPUs; how can I do it? For example, I want to run chat_completion.py on CUDA:0 and run text_completion.py on CUDA:1 simultaneously. Are there any ways to do it? 
Thank you.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e', 'files': [{'path': 'example_text_completion.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nhow can I do it", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["example_text_completion.py"], "doc": [], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "a102a597d1eb5d437f98dc0b55668ff61bc493b8", "is_iss": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/740", "iss_label": "download-install", "title": "download.sh: Enter for all models fails", "body": "- Procedure\r\n`source download.sh; <enter url>; <Enter for all models>`\r\n- Result\r\nFolders etc. set up, models not downloaded. 403 Forbidden Error\r\n- TS\r\nWas able to download all models by explicitly passing names as a list", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["wget"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["wget"]}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "d7e2e37e163981fd674ea2a633fac2014550898d", "is_iss": 0, "iss_html_url": "https://github.com/meta-llama/llama/issues/795", "iss_label": "", "title": "[Question] Is the Use of Llama2 Forbidden in Languages Other Than English?", "body": "Hello,\r\n\r\nI recently came across a claim from [Baichuan-inc](https://github.com/baichuan-inc) during their live stream event and in the press release for the Baichuan2 model. They stated that Meta prohibits the use of Llama2 in languages other than English.\r\n\r\nHowever, after reviewing the [use policy](https://ai.meta.com/llama/use-policy/) and the [license agreement](https://ai.meta.com/llama/license/) provided by Meta, I couldn't find any specific restriction regarding the model's application language. Additionally, in the `Responsible-Use-Guide.pdf`, there are even mentions of considerations for markets in other languages.\r\n\r\nCould you please clarify if the statement by [Baichuan-inc](https://github.com/baichuan-inc) that \"Meta prohibits the use of Llama2 in languages other than English\" is accurate? \r\n\r\nThank you!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd7e2e37e163981fd674ea2a633fac2014550898d', 'files': [{'path': 'MODEL_CARD.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nasking about the library's language support", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["MODEL_CARD.md"], "test": [], "config": [], "asset": []}} -{"organization": "meta-llama", "repo_name": "llama", "base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "is_iss": 1, "iss_html_url": "https://github.com/meta-llama/llama/issues/227", "iss_label": "documentation\nresearch-paper", "title": "where is the train file?", "body": "where is the train file? 
I want to learn how to train.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "meta-llama", "pro": "llama-cookbook", "path": ["llama_finetuning.py"]}], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code\nDoc"}, "loctype": {"code": ["llama_finetuning.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "f88765d504ce2fa9bc3926c76910b11510522892", "iss_html_url": "https://github.com/pallets/flask/issues/1224", "iss_label": "", "title": "Starting up a public server.", "body": "I ran into this problem today with one of my applications trying to make it public to my local network. \n\nC:\\Users\\Savion\\Documents\\GitHub\\Example-Flask-Website>flask\\Scripts\\python run.\npy\n- Running on http://127.0.0.1:5000/\n- Restarting with reloader\n 10.101.37.124 - - [26/Oct/2014 15:51:23] \"GET / HTTP/1.1\" 404 -\n- Running on http://0.0.0.0:5000/\n 10.101.37.124 - - [26/Oct/2014 15:51:38] \"GET / HTTP/1.1\" 404 -\n\nThe problem that I run into is the fact that this app continuously attempts to default to localhost. It is not until 2 Ctrl + C that it goes to 0.0.0.0, and then I still receive a 404 error in my browser. I do have routes that are valid when running locally. I have tried to create a new virtualenv and I still receive the same error, and I have reset the firewall rule on this application. All of that effort went unrewarded.\n\nAny ideas as to why my app attempts to start up on localhost first, then moves over, but then returns a 404?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f88765d504ce2fa9bc3926c76910b11510522892', 'files': [{'path': 'flask/views.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1\n404 error", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/views.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1", "iss_html_url": "https://github.com/pallets/flask/issues/834", "iss_label": "", "title": "How to get the serialized version of the session cookie in 0.10?", "body": "In version 0.9 I could simply get the value of the `session` like this: \n\n```\nflask.session.serialize()\n```\n\nBut after upgrading to 0.10 this is not working anymore. What's the alternative? 
How can I get the session value?\n\n(`flask.request.cookies.get('session')` is not good for me, because I would like to get the session right after login, so it's not part of the request yet)\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2d8a21c7321a9ead8e27208b49a18f4b8b27e2c1', 'files': [{'path': 'flask/sessions.py', 'Loc': {\"('SecureCookieSessionInterface', 'get_signing_serializer', 308)\": {'mod': []}, \"('TaggedJSONSerializer', 'dumps', 60)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nhow to do \u2026", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/sessions.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "22d82e70b3647ed16c7d959a939daf533377382b", "iss_html_url": "https://github.com/pallets/flask/issues/4015", "iss_label": "", "title": "2.0.0: build requires ContextVar module", "body": "Simple I cannot find it.\r\n```console\r\n+ /usr/bin/python3 setup.py build '--executable=/usr/bin/python3 -s'\r\nTraceback (most recent call last):\r\n File \"setup.py\", line 4, in <module>\r\n setup(\r\n File \"/usr/lib/python3.8/site-packages/setuptools/__init__.py\", line 144, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"/usr/lib64/python3.8/distutils/core.py\", line 121, in setup\r\n dist.parse_config_files()\r\n File \"/usr/lib/python3.8/site-packages/setuptools/dist.py\", line 689, in parse_config_files\r\n parse_configuration(self, self.command_options,\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 121, in parse_configuration\r\n meta.parse()\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 426, in parse\r\n section_parser_method(section_options)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 399, in parse_section\r\n self[name] = value\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 184, in __setitem__\r\n value = parser(value)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 515, in _parse_version\r\n version = self._parse_attr(value, self.package_dir)\r\n File \"/usr/lib/python3.8/site-packages/setuptools/config.py\", line 349, in _parse_attr\r\n module = import_module(module_name)\r\n File \"/usr/lib64/python3.8/importlib/__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1014, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 991, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 975, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 671, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 783, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"/home/tkloczko/rpmbuild/BUILD/Flask-2.0.0/src/flask/__init__.py\", line 7, in <module>\r\n from .app import Flask\r\n File \"/home/tkloczko/rpmbuild/BUILD/Flask-2.0.0/src/flask/app.py\", line 19, in <module>\r\n from werkzeug.local import ContextVar\r\nImportError: cannot import name 'ContextVar' from 'werkzeug.local' (/usr/lib/python3.8/site-packages/werkzeug/local.py)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 
'22d82e70b3647ed16c7d959a939daf533377382b', 'files': [{'path': 'setup.py', 'Loc': {'(None, None, None)': {'mod': [7]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "43e2d7518d2e89dc7ed0b4ac49b2d20211ad1bfa", "iss_html_url": "https://github.com/pallets/flask/issues/2977", "iss_label": "", "title": "Serial port access problem in DEBUG mode.", "body": "### Expected Behavior\r\n\r\nSending commands through the serial port.\r\n\r\n```python\r\napp = Flask(__name__)\r\nserialPort = serial.Serial(port = \"COM5\", baudrate=1000000,\r\n bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE)\r\n\r\nlamp = {\r\n 1 : {'name' : 'n1', 'state' : True},\r\n 2 : {'name' : 'n2', 'state' : True} \r\n}\r\n\r\n@app.route(\"/\")\r\ndef hello():\r\n templateData = {\r\n 'lamp': lamp\r\n }\r\n\r\n \r\n return render_template('main.html', **templateData)\r\n\r\n\r\n@app.route(\"/setPin/<action>\")\r\ndef action(action):\r\n\r\n if action == \"on\":\r\n\r\n serialPort.write(b\"n2c1111\\r\\n\")\r\n lamp[1][\"state\"] = True\r\n\r\n if action == \"off\":\r\n serialPort.write(b\"n2c0000\\r\\n\")\r\n lamp[1][\"state\"] = False\r\n\r\n\r\n templateData = {\r\n 'lamp': lamp\r\n }\r\n\r\n return render_template('main.html', **templateData)\r\n\r\nif __name__ == \"__main__\":\r\n app.run(host='0.0.0.0', port=5000, debug=True)\r\n```\r\n\r\n\r\n### Actual Behavior\r\n\r\nI can not access the serial port with FLASK_ENV = development and FLASK_DEBUG = 1. Everything works fine with DEBUG mode disabled.\r\n\r\n```pytb\r\nFLASK_APP = app.py\r\nFLASK_ENV = development\r\nFLASK_DEBUG = 1\r\nIn folder C:/Users/user/PycharmProjects/Ho_server\r\nC:\\Users\\user\\Anaconda3\\python.exe -m flask run\r\n * Serving Flask app \"app.py\" (lazy loading)\r\n * Environment: development\r\n * Debug mode: on\r\n * Restarting with stat\r\n * Debugger is active!\r\n * Debugger PIN: 138-068-963\r\n * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)\r\n127.0.0.1 - - [30/Oct/2018 10:49:27] \"GET /setPin/on HTTP/1.1\" 500 -\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\flask\\_compat.py\", line 35, in reraise\r\n raise value\r\n File \"C:\\Users\\user\\PycharmProjects\\H_server\\app.py\", line 8, in <module>\r\n bytesize=8, timeout=2, stopbits=serial.STOPBITS_ONE)\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialwin32.py\", line 31, in __init__\r\n super(Serial, self).__init__(*args, **kwargs)\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialutil.py\", line 240, in __init__\r\n self.open()\r\n File \"C:\\Users\\user\\Anaconda3\\lib\\site-packages\\serial\\serialwin32.py\", line 62, in open\r\n raise SerialException(\"could not open port {!r}: {!r}\".format(self.portstr, ctypes.WinError()))\r\nserial.serialutil.SerialException: could not open port 'COM5': PermissionError(13, 'Access is denied.', None, 5)\r\n```\r\n\r\n### Environment\r\n\r\n* Python version: 3.6.5\r\n* Flask version: 1.0.2\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", 
"info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pallets", "repo_name": "flask", "base_commit": "1a7fd980f8579bd7d7d53c812a77c1dc64be52ba", "iss_html_url": "https://github.com/pallets/flask/issues/1749", "iss_label": "", "title": "JSONEncoder and aware datetimes", "body": "I was surprised to see that though flask.json.JSONEncoder accepts datetime objects, it ignores the timezone. I checked werkzeug.http.http_date and it can handle timezone aware dates just fine if they are passed in, but the JSONEncoder insists on transforming the datetime to a timetuple, like this\n\n `return http_date(o.timetuple())`\n\nThis means i have to convert all my dates to utc before encoding them, otherwise I should overwrite the dafault() method in the encoder. Can you help me understand why the encoder was made to function with naive dates only?\nThx\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1a7fd980f8579bd7d7d53c812a77c1dc64be52ba', 'files': [{'path': 'flask/json.py', 'Loc': {\"('JSONEncoder', 'default', 60)\": {'mod': [78]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["flask/json.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "144d43830f663808c5fbca75b797350060acf7dd", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/559", "iss_label": "", "title": "Results files saved to specific folder", "body": "Having just installed Sherlock I was surprised to see the results files are just jumbled in with everything else instead of being in their own Results folder.\r\n\r\nHaving a separate folder would keep things cleaner especially as you use it more and the number of files increases.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '144d43830f663808c5fbca75b797350060acf7dd', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 65)': {'mod': [65]}}, 'status': 'modified'}, {'path': 'sherlock/sherlock.py', 'Loc': {\"(None, 'main', 462)\": {'mod': [478]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+\nDoc"}, "loctype": {"code": ["sherlock/sherlock.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "7ec56895a37ada47edd6573249c553379254d14a", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1911", "iss_label": "question", "title": "How do you search for usernames? New to this. ", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE.\r\n######################################################################\r\n\r\n-->\r\n\r\n## Checklist\r\n<!--\r\nPut x into all boxes (like this [x]) once you have completed what they say.\r\nMake sure complete everything in the checklist.\r\n-->\r\n- [ ] I'm asking a question regarding Sherlock\r\n- [ ] My question is not a tech support question.\r\n\r\n**We are not your tech support**. 
\r\nIf you have questions related to `pip`, `git`, or something that is not related to Sherlock, please ask them on [Stack Overflow](https://stackoverflow.com/) or [r/learnpython](https://www.reddit.com/r/learnpython/)\r\n\r\n\r\n## Question\r\n\r\nASK YOUR QUESTION HERE\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7ec56895a37ada47edd6573249c553379254d14a', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "65ce128b7fd8c8915c40495191d9c136f1d2322b", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/1297", "iss_label": "bug", "title": "name 'requests' is not defined", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n<!--\r\nPut x into all boxes (like this [x]) once you have completed what they say.\r\nMake sure complete everything in the checklist.\r\n-->\r\n\r\n- [x] I'm reporting a bug in Sherlock's functionality\r\n- [x] The bug I'm reporting is not a false positive or a false negative\r\n- [x] I've verified that I'm running the latest version of Sherlock\r\n- [x] I've checked for similar bug reports including closed ones\r\n- [x] I've checked for pull requests that attempt to fix this bug\r\n\r\n## Description\r\n<!--\r\nUnable to search for usernames.\r\nERROR: Problem while attempting to access data file URL 'https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json': name 'requests' is not defined\r\n\r\nlatest, pulled today\r\n-->\r\n\r\nERROR: Problem while attempting to access data file URL 'https://raw.githubusercontent.com/sherlock-project/sherlock/master/sherlock/resources/data.json': name 'requests' is not defined\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '65ce128b7fd8c8915c40495191d9c136f1d2322b', 'files': [{'path': 'sherlock/sites.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sherlock/sites.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "f63e17066dc4881ee5a164aed60b6e8f1e9ab129", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/462", "iss_label": "environment", "title": "File \"sherlock.py\", line 24, in <module> from requests_futures.sessions import FuturesSession ModuleNotFoundError: No module named 'requests_futures'", "body": "File \"sherlock.py\", line 24, in <module>\r\n from requests_futures.sessions import FuturesSession\r\nModuleNotFoundError: No module named 'requests_futures'", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f63e17066dc4881ee5a164aed60b6e8f1e9ab129', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", 
"loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "sherlock-project", "repo_name": "sherlock", "base_commit": "6c6faff416896a41701aa3e24e5b5a584bd5cb44", "iss_html_url": "https://github.com/sherlock-project/sherlock/issues/236", "iss_label": "question", "title": "No module named 'torrequest'", "body": "Hi,\r\nsimilar problem to module \"requests_futures\"\r\n\r\nTraceback (most recent call last):\r\n File \"sherlock.py\", line 25, in <module>\r\n from torrequest import TorRequest\r\nModuleNotFoundError: No module named 'torrequest'\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6c6faff416896a41701aa3e24e5b5a584bd5cb44', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "c0d95fd6c2cd8ffc0738819825c3065e3c89977c", "iss_html_url": "https://github.com/keras-team/keras/issues/4954", "iss_label": "", "title": "TimeDistributed Wrapper not working with LSTM/GRU", "body": "Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on [StackOverflow](http://stackoverflow.com/questions/tagged/keras) or [join the Keras Slack channel](https://keras-slack-autojoin.herokuapp.com/) and ask there instead of filing a GitHub issue.\r\n\r\nThank you!\r\n\r\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\r\npip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\r\n\r\n- [x] If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found [here](https://www.tensorflow.org/get_started/os_setup).\r\n\r\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\r\npip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\r\n\r\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\r\n\r\n\r\nI am trying to apply word level attention to words in a document after passing the sentences through a GRU. However, the TimeDistributed Wrapper isn't working with GRU/LSTM. 
\r\n\r\nI get the following error \r\n\r\n```python\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/engine/topology.py\", line 569, in __call__\r\n self.add_inbound_node(inbound_layers, node_indices, tensor_indices)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/engine/topology.py\", line 632, in add_inbound_node\r\n Node.create_node(self, inbound_layers, node_indices, tensor_indices)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/engine/topology.py\", line 164, in create_node\r\n output_tensors = to_list(outbound_layer.call(input_tensors[0], mask=input_masks[0]))\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/layers/wrappers.py\", line 129, in call\r\n y = self.layer.call(X) # (nb_samples * timesteps, ...)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/layers/recurrent.py\", line 201, in call\r\n input_shape = K.int_shape(x)\r\nFile \"/home/##/.local/lib/python2.7/site-packages/keras/backend/theano_backend.py\", line 128, in int_shape\r\n raise Exception('Not a Keras tensor:', x)\r\nException: ('Not a Keras tensor:', Reshape{3}.0)\r\n```\r\nThe code snipped is written below\r\n\r\n```python\r\ninput_layer = Input(shape=(document_size, img_h,), dtype='int32', name='input_layer')\r\nembedding_layer = TimeDistributed(Embedding(len(W2V), img_w, input_length=img_h, weights=[W2V], trainable=True, mask_zero=True))(input_layer)\r\ngru_word = TimeDistributed(GRU(GRU_layers[0], return_sequences=True, activation=conv_non_linear))(embedding_layer)\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c0d95fd6c2cd8ffc0738819825c3065e3c89977c', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "980a6be629610ee58c1eae5a65a4724ce650597b", "iss_html_url": "https://github.com/keras-team/keras/issues/16234", "iss_label": "type:support", "title": "Compiling model in callback causes TypeError", "body": "**System information**.\r\n- Have I written custom code (as opposed to using a stock example script provided in Keras): yes\r\n- TensorFlow version (use command below): 2.8.0 (2.4 too)\r\n- Python version: 3.7\r\n\r\n**Describe the problem**.\r\n\r\nIn a fine-tuning case I would like to do transfer-learning phase first (with fine-tuned layers frozen) and after that, all layers should be unfrozen. I wrote a callback that unfreeze the layers after few epochs. Unfortunately, after changing the layers' `trainable` attribute, the model should be recompiled - and the recompilation causes the `TypeError` (see colab). 
\r\n\r\nI am aware that I can workaround this by compiling and fitting model twice - for both phases separately - but the usage of callback seems more elegant to me.\r\n\r\n**Standalone code to reproduce the issue**.\r\n\r\nhttps://colab.research.google.com/drive/1u6VlH6EIQGXSp7vEIngTasp3v2EE42Wi?usp=sharing\r\n\r\n**Source code / logs**.\r\n\r\n```\r\nTypeError: 'NoneType' object is not callable\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '980a6be629610ee58c1eae5a65a4724ce650597b', 'files': [{'path': 'keras/engine/training.py', 'Loc': {\"('Model', 'make_train_function', 998)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "90f441a6a0ed4334cac53760289061818a68b7c1", "iss_html_url": "https://github.com/keras-team/keras/issues/2893", "iss_label": "", "title": "Is the cifar10_cnn.py example actually performing data augmentation?", "body": "When `datagen.fit(X_train)` is called in the [`cifar10_cnn.py` example](https://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py#L103), shouldn't it be (when `data_augmentation=True`):\n\n``` python\ndatagen.fit(X_train, augment=True)\n```\n\nas the [default value for `augment` is `False`](https://github.com/fchollet/keras/blob/master/keras/preprocessing/image.py#L410)?\n\nAlso, I am right in thinking when using `augment=True` the original (i.e. non-augmented - ignoring any normalisation/standardisation) data is not necessarily trained on? If so, I thought data augmentation is a method of artificially increasing the size of your dataset, so shouldn't we additionally be training on the non-augmented data? Thanks\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '90f441a6a0ed4334cac53760289061818a68b7c1', 'files': [{'path': 'keras/preprocessing/image.py', 'Loc': {\"('ImageDataGenerator', 'fit', 404)\": {'mod': [419, 420, 421, 422, 423]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/preprocessing/image.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "654404c2ed8db47a5361a3bff9126a16507c9c4c", "iss_html_url": "https://github.com/keras-team/keras/issues/1787", "iss_label": "", "title": "What happened to WordContextProduct?", "body": "``` python\nIn [1]: import keras\n\nIn [2]: keras.__version__\nOut[2]: '0.3.2'\n\nIn [3]: from keras.layers.embeddings import WordContextProduct\nUsing Theano backend.\n/usr/local/lib/python3.5/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module.\n warnings.warn(\"downsample module has been moved to the pool module.\")\n---------------------------------------------------------------------------\nImportError Traceback (most recent call last)\n<ipython-input-3-65e83b407b3e> in <module>()\n----> 1 from keras.layers.embeddings import WordContextProduct\n\nImportError: cannot import name 'WordContextProduct'\n```\n\nThis page now returns a 404: https://github.com/fchollet/keras/blob/master/examples/skipgram_word_embeddings.py\n\nWas this code taken out of keras, or just moved somewhere else?\n\nThanks,\n\nZach\n\n---\n\nPlease make sure that the boxes below are checked before you submit your issue. Thank you!\n- [x] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [x] If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n- [x] Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).\nI'm trying something like this:\n\n``` python\nmodels = []\n\n# Word vectors\nmodel_word = Sequential()\nmodel_word.add(Embedding(1e4, 300, input_length=1))\nmodel_word.add(Reshape(dims=(300,)))\nmodels.append(model_word)\n\n# Context vectors\nmodel_context = Sequential()\nmodel_context.add(Embedding(1e4, 300, input_length=1))\nmodel_context.add(Reshape(dims=(300,)))\nmodels.append(model_context)\n\n# Combined model\nmodel = Sequential()\nmodel.add(Merge(models, mode='dot'))\nmodel.add(Dense(1))\nmodel.add(Activation('sigmoid'))\nmodel.compile(loss='mean_squared_error', optimizer=Adam(lr=0.001))\n```\n\nDoes that look reasonable? 
And then as input, I need to provide 2 lists of indexes?\n\"", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [54], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "8778add0d66aed64a8970c34576bf5800bc19170", "iss_html_url": "https://github.com/keras-team/keras/issues/3335", "iss_label": "", "title": "Masking the output of a conv layer", "body": "Hi,\nI am trying to apply a given mask in the output of a conv layer. The simplest form of my problem can be seen in the img\n\n![image](https://cloud.githubusercontent.com/assets/810340/17194147/e8728ad4-542c-11e6-8c60-b2949c288cec.png)\n\nThe mask should be considered as an input when training/predicting. I have already tried to use the Merge layer (mode='mul') to apply the input mask as follows:\n\n``` python\nmain_input= Input(shape=(3, 64, 64))\nmask1_input = Input(shape=(1, 64, 64))\nmask2_input = Input(shape=(1, 64, 64))\n\nconv1 = Convolution2D(1,7,7, border_mode='same')(main_input)\nmerged_model1 = Sequential()\nmerged_model1.add(Merge([conv1, mask1_input], mode='mul'))\n\nconv2 = Convolution2D(1, 7,7, border_mode='same')(main_input)\nmerged_model2 = Sequential()\nmerged_model2.add(Merge([conv2, mask2_input], mode='mul'))\n\nmodel = Sequential()\nmodel.add(Merge([merged_model1,merged_model2],mode='sum'))\n```\n\nBut it is not working, maybe because I'm trying to merge a layer with a Tensor. But even if I could do that, I don't feel this is the right way to do that. Can someone help?\n\nPlease make sure that the boxes below are checked before you submit your issue. Thank you!\n- [X] Check that you are up-to-date with the master branch of Keras. You can update with:\n pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps\n- [X] If running on Theano, check that you are up-to-date with the master branch of Theano. 
You can update with:\n pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8778add0d66aed64a8970c34576bf5800bc19170', 'files': [{'path': 'keras/src/models/model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/models/model.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "ed07472bc5fc985982db355135d37059a1f887a9", "iss_html_url": "https://github.com/keras-team/keras/issues/13101", "iss_label": "type:support", "title": "model.fit : AttributeError: 'Model' object has no attribute '_compile_metrics'", "body": "**System information** \r\n- Have I written custom code (as opposed to using example directory): Yes\r\n- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Mint 19.3\r\n- TensorFlow backend (yes / no): yes\r\n- TensorFlow version: 2.0.0b1\r\n- Keras version: 2.2.4-tf\r\n- Python version: 3.6\r\n- CUDA/cuDNN version: /\r\n- GPU model and memory: GTX 940MX, 430.26\r\n\r\n**Describe the current behavior** \r\nThe model.fit() function throws a `AttributeError: 'Model' object has no attribute '_compile_metrics'` exception.\r\n\r\n**Describe the expected behavior** \r\nIt should work ?\r\n\r\n**Code to reproduce the issue** \r\n```python\r\nimport numpy as np\r\nimport tensorflow as tf\r\n\r\ninput_3D = tf.keras.Input(shape=(None, None, None, 1)) # unknown width, length and depth, 1 gray channel\r\nnetwork_3D = tf.keras.layers.Conv3D(\r\n filters = 128, # dimensionality of output space\r\n kernel_size = 5, # shape of 2D convolution window (5x5)\r\n strides = 1, # stride of convolution along all spatial dimensions\r\n padding = \"same\", data_format = \"channels_last\", # input with shape (batch, height, width, channels)\r\n activation = tf.keras.layers.LeakyReLU(alpha = 0.2), # activation function to use\r\n use_bias = True,\r\n kernel_initializer = tf.keras.initializers.TruncatedNormal(stddev = 1e-2),\r\n # initializer for the kernel weights matrix\r\n bias_initializer = 'zeros', # initializer for the bias vector\r\n input_shape = (None, None, None, 1)\r\n)(input_3D)\r\nnetwork_3D = tf.keras.layers.BatchNormalization(\r\n momentum = 0.1, # momentum + decay = 1.0\r\n epsilon = 1e-5,\r\n scale = True\r\n)(network_3D)\r\n\r\nmodel = tf.keras.Model(inputs = input_3D, outputs = network_3D)\r\nmodel.loss = tf.losses.mean_squared_error\r\nmodel.optimizer = tf.keras.optimizers.Adam(learning_rate = 0.002)\r\nv = np.zeros((100,100,100,100))\r\nl = np.zeros((100,100,100))\r\nmodel.fit(v, l, epochs = 20, batch_size = 1)\r\n``` \r\n\r\n**Other info / logs** \r\n```python\r\nTraceback (most recent call last):\r\n File \".../venv/lib/python3.6/site-packages/IPython/core/interactiveshell.py\", line 3296, in run_code\r\n exec(code_obj, self.user_global_ns, self.user_ns)\r\n File \"<ipython-input-14-a0cacfaacdab>\", line 1, in <module>\r\n history = model.fit(v, l, epochs = 20, batch_size = 1)\r\n File \".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 643, in fit\r\n use_multiprocessing=use_multiprocessing)\r\n File \".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training_arrays.py\", line 632, in fit\r\n shuffle=shuffle)\r\n File 
\".../venv/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py\", line 2385, in _standardize_user_data\r\n metrics=self._compile_metrics,\r\nAttributeError: 'Model' object has no attribute '_compile_metrics'\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ed07472bc5fc985982db355135d37059a1f887a9', 'files': [{'path': 'keras/engine/training.py', 'Loc': {\"('Model', 'compile', 40)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "a3d160b9467c99cbb27f9aa0382c759f45c8ee66", "iss_html_url": "https://github.com/keras-team/keras/issues/9741", "iss_label": "", "title": "Improve Keras Documentation User Experience for Long Code Snippets By Removing The Need For Horizontal Slide Bars", "body": "**Category**: documentation user-experience\r\n**Comment**: modify highlight.js <code></code> to wrap long documentation code snippets\r\n**Why**: eliminates the need for a user to manually click and slide a horizontal slider just to get a quick sense of what available parameters and their default values are\r\n\r\n**Context**\r\nWhile reading the documentation, and coming from a scikit-learn background, I really like how their documentation shows all the class and method parameters ([example page](http://scikit-learn.org/stable/modules/generated/sklearn.neural_network.MLPClassifier.html)). It's very helpful to quickly be able to see the default parameters.\r\n\r\nTake [Dense](https://keras.io/layers/core/#dense) for example. If the documentation looked like this (imagine this a code block, not individually highlighted lines):\r\n\r\n`keras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)`\r\n\r\nBenefits:\r\n- easy to read\r\n- no scrolling a horizontal slider\r\n- immediately tells me the available parameters and their default values\r\n\r\nCompare that experience to the current Keras experience:\r\n\r\n```\r\nkeras.layers.Dense(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None)\r\n```\r\n\r\nDisadvantages:\r\n- requires scrolling horizontally to see the rest\r\n- easy to lose track of where you are while scrolling\r\n- requires physical action to see everything\r\n\r\nThe Keras team no-doubt is busy with much bigger concerns than documentation formatting. One could say that the \"Arguments\" are all listed below or by clicking the \"Source\". True, however the key point I'm trying to make is usability, and quick readability. Reading through an \"Argument\"'s verbose description, or having to scroll horizontally is not quick nor an optimal experience.\r\n\r\nI'm not going to make a case for why making documentation easy-to-read is important. 
I think the Keras documentation **content** itself is outstanding.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a3d160b9467c99cbb27f9aa0382c759f45c8ee66', 'files': [{'path': 'docs/autogen.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["docs/autogen.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "7a12fd0f8597760cf8e1238a9b021e247693517b", "iss_html_url": "https://github.com/keras-team/keras/issues/2372", "iss_label": "", "title": "problem of save/load model", "body": "HI, \n\nThanks for making such a wonderful tool!\n\nI'm using Keras 1.0. I want to save and load the model both the arch and the parameters. So I use the method in FAQ. Here is the code.\n\n```\ndef save_model(self, model, options):\n json_string = model.to_json()\n open(options['file_arch'], 'w').write(json_string)\n model.save_weights(options['file_weight'])\n\ndef load_model(self, options):\n self.model = model_from_json(open(options['file_arch']).read())\n self.model.load_weights(options['file_weight'])\n return self.model\n```\n\nWhen I load model and use model.predict(), there is a error:\nAttributeError: 'NoneType' object has no attribute 'predict'\n\nDon't know why. If I don't load the model from file, just train a model and use it, everything seems ok.\n\nI checked the issues, most people just need to load the parameters. Is it possible when I load the architecture, I overwrite the old model and loose the model.predict()?\n\nThanks again for making Keras!\n\nBen\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7a12fd0f8597760cf8e1238a9b021e247693517b', 'files': [{'path': 'keras/src/trainers/trainer.py', 'Loc': {\"('Trainer', 'compile', 40)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/trainers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "284ef7b495a61238dccc6149996c4cb88fef1c5a", "iss_html_url": "https://github.com/keras-team/keras/issues/933", "iss_label": "", "title": "Same model but graph gives bad performance", "body": "Hello, \n\nI am learning to use Graph as it seems more powerful so I implemented one of my previous model which uses Sequential. 
Here is the model using sequential (number of dimension set in random):\n\n```\ndef build_generation_embedding_model(self, dim):\n print \"Build model ...\"\n input_model = Sequential()\n input_model.add(TimeDistributedDense(dim, input_shape=(10,10)))\n input_model.add(LSTM(dim, return_sequences=False))\n input_model.add(Dense(dim))\n canonical_model = Sequential()\n canonical_model.add(TimeDistributedDense(dim, input_shape=(15,15)))\n canonical_model.add(LSTM(dim, return_sequences=False))\n canonical_model.add(Dense(dim))\n self.model = Sequential()\n self.model.add(Merge([input_model, canonical_model], mode='concat'))\n self.model.add(Dense(15))\n self.model.add(Activation('softmax'))\n self.model.compile(loss='categorical_crossentropy', optimizer='rmsprop')\n```\n\nThe model works fine and below is my reimplementation using Graph:\n\n```\ndef build_generation_embedding_model_graph(self, dim):\n self.model = Graph()\n self.model.add_input(name='input1', input_shape=(10,10))\n self.model.add_input(name='canonical', input_shape=(15,15))\n self.model.add_node(TimeDistributedDense(dim), name='Embed_input1', input='input1')\n self.model.add_node(TimeDistributedDense(dim), name='Embed_canonical', input='canonical')\n self.model.add_node(LSTM(dim, return_sequences=False), name='Hidden_input1', input='Embed_input1')\n self.model.add_node(LSTM(dim, return_sequences=False), name='Hidden_canonical', input='Embed_canonical')\n self.model.add_node(Dense(15), name='merge', inputs=['Hidden_input1','Hidden_canonical'], merge_mode='concat')\n self.model.add_node(Activation('softmax'), name='activation', input='merge')\n self.model.add_output(name='output', input='merge')\n self.model.compile('rmsprop', {'output':'categorical_crossentropy'})\n```\n\nMy impression is that they are exactly the same model (grateful if somebody spotted something wrong there). But the model based on Graph gives a loss of 3.6 while the loss for the other one is around 0.002. \n\nIs there a reason for this please ?\n\nThank you for your help\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [36], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec", "iss_html_url": "https://github.com/keras-team/keras/issues/7603", "iss_label": "", "title": "Loss Increases after some epochs ", "body": "I have tried different convolutional neural network codes and I am running into a similar issue. The network starts out training well and decreases the loss but after sometime the loss just starts to increase. 
I have shown an example below: \r\nEpoch 15/800\r\n1562/1562 [==============================] - 49s - loss: 0.9050 - acc: 0.6827 - val_loss: 0.7667 - val_acc: 0.7323\r\nEpoch 16/800\r\n1562/1562 [==============================] - 49s - loss: 0.8906 - acc: 0.6864 - val_loss: 0.7404 - val_acc: 0.7434\r\nEpoch 380/800\r\n1562/1562 [==============================] - 49s - loss: 1.5519 - acc: 0.4880 - val_loss: 1.4250 - val_acc: 0.5233\r\nEpoch 381/800\r\n1562/1562 [==============================] - 48s - loss: 1.5416 - acc: 0.4897 - val_loss: 1.5032 - val_acc: 0.4868\r\nEpoch 800/800\r\n1562/1562 [==============================] - 49s - loss: 1.8483 - acc: 0.3402 - val_loss: 1.9454 - val_acc: 0.2398\r\n\r\nI have tried this on different cifar10 architectures I have found on githubs. I am training this on a GPU Titan-X Pascal. This only happens when I train the network in batches and with data augmentation. I have changed the optimizer, the initial learning rate etc. I have also attached a link to the code. I just want a cifar10 model with good enough accuracy for my tests, so any help will be appreciated. The code is from this:\r\nhttps://github.com/fchollet/keras/blob/master/examples/cifar10_cnn.py\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c2b844ba2fe8d0d597da9ef6a9af3b20d18d0bec', 'files': [{'path': 'examples/cifar10_cnn.py', 'Loc': {'(None, None, None)': {'mod': [65]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/cifar10_cnn.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "530eff62e5463e00d73e72c51cc830b9ac3a14ab", "iss_html_url": "https://github.com/keras-team/keras/issues/3997", "iss_label": "", "title": "Using keras for Distributed training raise RuntimeError(\"Graph is finalized and cannot be modified.\")", "body": "I'm using keras for distributed training with following code:\n\n``` python\n#!/usr/bin/env python\n# -*- coding:utf-8 -*-\n# Created by Enigma on 2016/9/26\n\nimport numpy as np\nimport tensorflow as tf\n\n# Define Hyperparameters\nFLAGS = tf.app.flags.FLAGS\n\n# For missions\ntf.app.flags.DEFINE_string(\"ps_hosts\", \"\",\n \"Comma-separated list of hostname:port pairs\")\ntf.app.flags.DEFINE_string(\"worker_hosts\", \"\",\n \"Comma-separated list of hostname:port pairs\")\ntf.app.flags.DEFINE_string(\"job_name\", \"\", \"One of 'ps', 'worker'\")\ntf.app.flags.DEFINE_integer(\"task_index\", 0, \"Index of task within the job\")\n\n# Hyperparameters\n\nfrom keras import backend as K\nfrom keras.layers import Input, Dense\nfrom keras.models import Model\n\n\ndef main(_):\n ps_hosts = FLAGS.ps_hosts.split(\",\")\n worker_hosts = FLAGS.worker_hosts.split(\",\")\n cluster = tf.train.ClusterSpec({\"ps\": ps_hosts, \"worker\": worker_hosts})\n\n server_config = tf.ConfigProto(\n gpu_options=tf.GPUOptions(allow_growth=True),\n log_device_placement=True)\n server = tf.train.Server(cluster, config=server_config,\n job_name=FLAGS.job_name, task_index=FLAGS.task_index)\n\n if FLAGS.job_name == \"ps\":\n server.join()\n elif FLAGS.job_name == \"worker\":\n with tf.device(tf.train.replica_device_setter(\n worker_device=\"/job:worker/task:%d/cpu:0\" % FLAGS.task_index,\n cluster=cluster)):\n global_step = tf.Variable(0, name='global_step', trainable=False)\n inputs = 
Input(shape=[1, ])\n hidden = Dense(10, activation='relu')(inputs)\n output = Dense(1, activation='sigmoid')(hidden)\n model = Model(input=inputs, output=output)\n\n saver = tf.train.Saver()\n summary_op = tf.merge_all_summaries()\n\n sv = tf.train.Supervisor(is_chief=(FLAGS.task_index == 0),\n logdir=\"./checkpoint/\",\n # init_op=init_op,\n summary_op=summary_op,\n saver=saver,\n global_step=global_step,\n save_model_secs=60)\n with sv.managed_session(server.target) as sess:\n step = 0\n K.set_session(sess)\n model.compile(optimizer='sgd', loss='mse')\n while step < 1000000:\n train_x = np.random.randn(1)\n train_y = 2 * train_x + np.random.randn(1) * 0.33 + 10\n model.fit(train_x, train_y)\n sv.stop()\n\nif __name__ == \"__main__\":\n tf.app.run()\n```\n\nthen I run it with:\n\n```\n/opt/anaconda3/bin/python /cache/allenwoods/keras_dis_test.py --ps_hosts=0.0.0.0:48636 --worker_hosts=0.0.0.0:46261 --job_name=ps --task_index=0\n/opt/anaconda3/bin/python /cache/allenwoods/keras_dis_test.py --ps_hosts=0.0.0.0:48636 --worker_hosts=0.0.0.0:46261 --job_name=worker --task_index=0\n```\n\nit doesn't work and return\n\n```\nTraceback (most recent call last):\n File \"/cache/allenwoods/keras_dis_test.py\", line 73, in <module>\n tf.app.run()\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/platform/app.py\", line 30, in run\n sys.exit(main(sys.argv[:1] + flags_passthrough))\n File \"/cache/allenwoods/keras_dis_test.py\", line 69, in main\n model.fit(train_x, train_y)\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 969, in managed_session\n self.stop(close_summary_writer=close_summary_writer)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 797, in stop\n stop_grace_period_secs=self._stop_grace_secs)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/coordinator.py\", line 386, in join\n six.reraise(*self._exc_info_to_raise)\n File \"/opt/anaconda3/lib/python3.5/site-packages/six.py\", line 686, in reraise\n raise value\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/training/supervisor.py\", line 959, in managed_session\n yield sess\n File \"/cache/allenwoods/VRLforTraffic/src/missions/keras_dis_test.py\", line 65, in main\n model.compile(optimizer='sgd', loss='mse')\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/engine/training.py\", line 484, in compile\n self.optimizer = optimizers.get(optimizer)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/optimizers.py\", line 580, in get\n instantiate=True, kwargs=kwargs)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/utils/generic_utils.py\", line 18, in get_from_module\n return res()\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/optimizers.py\", line 134, in __init__\n self.iterations = K.variable(0.)\n File \"/opt/anaconda3/lib/python3.5/site-packages/keras/backend/tensorflow_backend.py\", line 149, in variable\n v = tf.Variable(value, dtype=_convert_string_dtype(dtype), name=name)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 215, in __init__\n dtype=dtype)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 327, in _init_from_args\n self._snapshot = array_ops.identity(self._variable, name=\"read\")\n File 
\"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 3645, in get_controller\n yield default\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/contextlib.py\", line 77, in __exit__\n self.gen.throw(type, value, traceback)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2891, in name_scope\n yield \"\" if new_stack is None else new_stack + \"/\"\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 4150, in name_scope\n yield scope\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/ops/variables.py\", line 293, in _init_from_args\n initial_value, name=\"initial_value\", dtype=dtype)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 657, in convert_to_tensor\n ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py\", line 180, in _constant_tensor_conversion_function\n return constant(v, dtype=dtype, name=name)\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/constant_op.py\", line 167, in constant\n attrs={\"value\": tensor_value, \"dtype\": dtype_value}, name=name).outputs[0]\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2339, in create_op\n self._check_not_finalized()\n File \"/opt/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/ops.py\", line 2080, in _check_not_finalized\n raise RuntimeError(\"Graph is finalized and cannot be modified.\")\nRuntimeError: Graph is finalized and cannot be modified.\n```\n\nI wondering if it happens because keras' model wasn't created as part of the graph used in tf.train.Supervisor, but I have not a clue on how to prove it or fix it. 
Any idea\uff1f\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '530eff62e5463e00d73e72c51cc830b9ac3a14ab', 'files': [{'path': 'keras/engine/training.py', 'Loc': {\"('Model', '_make_train_function', 685)\": {'mod': []}, \"('Model', '_make_test_function', 705)\": {'mod': []}, \"('Model', '_make_predict_function', 720)\": {'mod': []}}, 'status': 'modified'}, {'path': 'keras/backend/tensorflow_backend.py', 'Loc': {\"(None, 'manual_variable_initialization', 31)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/engine/training.py", "keras/backend/tensorflow_backend.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "c2e36f369b411ad1d0a40ac096fe35f73b9dffd3", "iss_html_url": "https://github.com/keras-team/keras/issues/4810", "iss_label": "", "title": "Parent module '' not loaded, cannot perform relative import with vgg16.py", "body": "just set up my ubuntu and have the python 3.5 installed, together with Keras...the following occurs:\r\n\r\nRESTART: /usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py \r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.5/dist-packages/keras/applications/vgg16.py\", line 14, in <module>\r\n from ..models import Model\r\nSystemError: Parent module '' not loaded, cannot perform relative import\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c2e36f369b411ad1d0a40ac096fe35f73b9dffd3', 'files': [{'path': 'keras/applications/vgg16.py', 'Loc': {'(None, None, None)': {'mod': [14, 15, 16, 17, 18, 19, 20, 21]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/applications/vgg16.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "keras-team", "repo_name": "keras", "base_commit": "da86250e5a95a7adccabd8821b0d51508c82bddc", "iss_html_url": "https://github.com/keras-team/keras/issues/18439", "iss_label": "stat:awaiting response from contributor\nstale\ntype:Bug", "title": "Problem with framework agnostic KerasVariable slicing with another KerasVariable", "body": "I defined a KerasVariable with shape (n,d) in a `keras.Layer()` using `self.add_weight()`. I've also defined another KerasVariable with shape (1) , dtype=\"int32\", and value 0. 
\r\n\r\n```\r\nself.first_variable = self.add_weight(\r\n initializer=\"zeros\", shape=(self.N,input_shape[-1]), trainable=False\r\n)\r\nself.second_variable = self.add_weight(initializer=\"zeros\",shape=(1), trainable=False, dtype=\"int32\")\r\n```\r\n\r\nDuring a call to this custom layer, I'm trying to retrieve a specific index of the first variable using the 2nd variable with:\r\n\r\n`self.first_variable[self.second_variable.value]`\r\n\r\nThis works as expected with the PyTorch backend, but throws an error with the TensorFlow backend.\r\n\r\n```\r\nOnly integers, slices (`:`), ellipsis (`...`), tf.newaxis (`None`) and scalar tf.int32/tf.int64 tensors are valid indices, got <tf.Variable 'custom_layer/variable_1:0' shape=(1,) dtype=int32>\r\n\r\nArguments received by CustomLayer.call():\r\n \u2022 x=tf.Tensor(shape=(None, 1600), dtype=float32)\r\n \u2022 training=True\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'da86250e5a95a7adccabd8821b0d51508c82bddc', 'files': [{'path': 'keras/src/ops/core.py', 'Loc': {\"(None, 'slice', 388)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["keras/src/ops/core.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "09d9f63c98f9c4fc0953dd3fd6fb4589e9e1f6f3", "iss_html_url": "https://github.com/nvbn/thefuck/issues/376", "iss_label": "", "title": "Shell history pollution", "body": "I haven't used this, but I just thought maybe this is not such a good idea because it's going to make traversing shell history really irritating. Does this do anything to get around that, or are there any workarounds?\n\nIf not, I know in zsh you can just populate the command line with whatever you want using LBUFFER and RBUFFER. What if you made it an option to type \"fuck\" then hit ctrl-F (for \"fuck\"), and it would just replace your command line with the correction, and if there are multiple candidates, cycle through them by hitting ctrl-F again. That also lets you edit the correction however you want as well.\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ohmyzsh", "pro": "ohmyzsh", "path": ["plugins/thefuck"]}], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["plugins/thefuck"]}} +{"organization": "nvbn", "repo_name": "thefuck", "base_commit": "6975d30818792f1b37de702fc93c66023c4c50d5", "iss_html_url": "https://github.com/nvbn/thefuck/issues/1087", "iss_label": "", "title": "Thinks 'sl' is install python softlayer ", "body": "\r\n![image](https://user-images.githubusercontent.com/13007697/81414970-66971080-910d-11ea-8a44-da5ab9ca77f9.png)\r\nAh, yes. 
This wasn't a mis-spelling of ls at all, but me installing Python-Softlayer.\r\n\r\n\r\nThe output of `thefuck --version` (something like `The Fuck 3.1 using Python\r\n3.5.0 and Bash 4.4.12(1)-release`):\r\n\r\n The Fuck 3.30 using Python 3.8.2 and Bash 5.0.16(1)-release\r\n\r\n\r\nYour system (Debian 7, ArchLinux, Windows, etc.):\r\n\r\nManjaro\r\n\r\nHow to reproduce the bug:\r\n\r\nType sl in the terminal, then fuck\r\n\r\n\r\n\r\n<!-- It's only with enough information that we can do something to fix the problem. -->\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6975d30818792f1b37de702fc93c66023c4c50d5', 'files': [{'path': 'thefuck/rules/sl_ls.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["thefuck/rules/sl_ls.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "921778a7cfa442409d17ab946c5f579e308c4f2b", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/404", "iss_label": "invalid", "title": "Inexplicable automatic Q&A appears in the response content when calling the API", "body": "The model used is baichuan-13b\r\nThe script used is src/api_demo.py\r\nThe prompt is: \u4f60\u597d (hello)\r\nThe answer is as shown in the image\r\n![image](https://github.com/hiyouga/LLaMA-Efficient-Tuning/assets/26214176/0d2beb92-e3b4-4126-a84f-d30bde97a194)\r\n\r\nI don't understand why automatic multi-turn self-questioning and answering appears", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '921778a7cfa442409d17ab946c5f579e308c4f2b', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nmentioned in the README", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "984b202f835d6f3f4869cbb1f0460bb2d9163fc1", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/6562", "iss_label": "solved", "title": "Batch Inference Error for qwen2vl Model After Full Fine-Tuning", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n- `llamafactory` version: 0.9.2.dev0\r\n- Python version: 3.8.20\r\n- PyTorch version: 2.4.1+cu121 (GPU)\r\n- Transformers version: 4.46.1\r\n- Datasets version: 3.1.0\r\n- Accelerate version: 1.0.1\r\n- PEFT version: 0.12.0\r\n- TRL version: 0.9.6\n\n### Reproduction\n\n\r\nI have fine-tuned the qwen2vl model using the command:\r\n\r\n```python\r\nllamafactory-cli train examples/train_full/qwen2vl_full_sft.yaml\r\n```\r\nAfter saving the model in the \"saves\" directory, I attempted to perform batch inference using the provided script:\r\n\r\n```python\r\npython scripts/vllm_infer.py --model_name_or_path path_to_merged_model --dataset alpaca_en_demo\r\n```\r\nHowever, I encountered the following error:\r\n\r\n```python\r\nValueError: This model does not support image input.\r\n```\r\n\r\n1. The model_path I used points to the model saved after running the full fine-tuning script.\r\n2. I have successfully used the LoRA 
fine-tuned model (trained with the lora_sft script and merged with the merge_lora script), which allows for inference using the method provided in the qwen2vl documentation.\r\n3. However, the model saved after full fine-tuning does not seem to support direct inference in the same way.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '984b202f835d6f3f4869cbb1f0460bb2d9163fc1', 'files': [{'path': 'scripts/vllm_infer.py', 'Loc': {\"(None, 'vllm_infer', 38)\": {'mod': [43]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/vllm_infer.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "4ed2b629a51ef58d229c795e85238d40346ecb58", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/5478", "iss_label": "solved", "title": "Can we set default_system in the yaml file when training?", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n- `llamafactory` version: 0.8.4.dev0\r\n- Platform: Linux-5.4.0-125-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- PyTorch version: 2.4.0+cu121 (GPU)\r\n- Transformers version: 4.44.2\r\n- Datasets version: 2.21.0\r\n- Accelerate version: 0.33.0\r\n- PEFT version: 0.12.0\r\n- TRL version: 0.9.6\r\n- GPU type: NVIDIA A800-SXM4-80GB\r\n- DeepSpeed version: 0.15.0\n\n### Reproduction\n\n llamafactory-cli train\n\n### Expected behavior\n\nWe should not need the `default_system` in `template.py`.\r\nAllow setting `default_system` in the training yaml file to override it, so we do not need to modify the source code in `template.py`.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '4ed2b629a51ef58d229c795e85238d40346ecb58', 'files': [{'path': 'data/', 'Loc': {}}, {'path': 'data/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["data/"]}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "18c6e6fea9dcc77c03b36301efe2025a87e177d5", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1971", "iss_label": "solved", "title": "llama's response repeats the input, then the answer", "body": "### Reminder\n\n- [ ] I have read the README and searched the existing issues.\n\n### Reproduction\n\n input_ids = tokenizer([\"[INST] \" + text + \" [/INST]\"], return_tensors=\"pt\", add_special_tokens=False).input_ids.to('cuda')\r\n\r\n generate_input = {\r\n \"input_ids\": input_ids,\r\n \"max_new_tokens\": 512,\r\n \"do_sample\": True,\r\n \"top_k\": 10,\r\n \"top_p\": 0.95,\r\n \"temperature\": 0.01,\r\n \"repetition_penalty\": 1.3,\r\n \"eos_token_id\": tokenizer.eos_token_id,\r\n \"bos_token_id\": tokenizer.bos_token_id,\r\n \"pad_token_id\": tokenizer.pad_token_id\r\n }\r\n\r\n generate_ids = model.generate(**generate_input)\r\n response = tokenizer.decode(generate_ids[0], skip_special_tokens=True)\r\n print(response)\n\n### Expected behavior\n\nI expect that llama just responds with the answer. 
For example,\r\nif the input is \"[INST] how are you [/INST]\", the output should be \"**I am fine**\",\r\nbut it repeats the input; the output is \"**[INST] how are you [/INST] I am fine**\"\r\n\n\n### System Info\n\n_No response_\n\n### Others\n\nDo you have any suggestions? This behaviour will limit the speed of the output and I wonder why this happens?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '18c6e6fea9dcc77c03b36301efe2025a87e177d5', 'files': [{'path': 'src/llmtuner/chat/chat_model.py', 'Loc': {\"('ChatModel', 'chat', 88)\": {'mod': [102]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/chat/chat_model.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "13eb365eb768f30d46967dd5ba302ab1106a96b6", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1443", "iss_label": "solved", "title": "When resuming training from a checkpoint with deepspeed on one machine with eight GPUs, configuring --checkpoint_dir directly only loads the model weights and cannot load the optimizer weights", "body": "When configuring the baichuan2 model to resume training from a checkpoint at a fixed `step`, I want to load both mp_rank_00_model_states.pt and zero_pp_rank_*_mp_rank_00_optim_states.pt\r\n\r\nHowever, when launching resumed training with `--checkpoint_dir` using the following command, the optimizer states zero_pp_rank_*_mp_rank_00_optim_states.pt were not loaded\r\n`\r\ndeepspeed --num_gpus ${NUM_GPUS_PER_WORKER} src/train_bash.py \\\r\n --stage sft \\\r\n --model_name_or_path /xxxxxxxxxx/model_weight \\\r\n --deepspeed ./ds_config.json \\\r\n --do_train \\\r\n --dataset alpaca_gpt4_en \\\r\n --template default \\\r\n --checkpoint_dir /xxxxxxxxxxxxxxxxx/output_sft/checkpoint-1 \\\r\n --finetuning_type full \\\r\n --output_dir ./output_sft \\\r\n --overwrite_cache \\\r\n --overwrite_output_dir \\\r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps 16 \\\r\n --lr_scheduler_type cosine \\\r\n --logging_steps 1 \\\r\n --save_steps 10000 \\\r\n --learning_rate 5e-5 \\\r\n --num_train_epochs 10.0 \\\r\n --plot_loss \\\r\n --fp16 | tee logs/train_g16_lr5e.log\r\n`\r\n\r\n\r\nHow can I successfully load all the weights and states?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '13eb365eb768f30d46967dd5ba302ab1106a96b6', 'files': [{'path': 'src/llmtuner/tuner/sft/workflow.py', 'Loc': {\"(None, 'run_sft', 19)\": {'mod': [67]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/tuner/sft/workflow.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "5377d0bf95f2fc79b75b253e956a7945f3030ad3", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/908", "iss_label": "solved", "title": "Besides the BLEU score and the Chinese ROUGE 
score, can other evaluation metrics be used?", "body": "I want to use the model for intent and slot extraction. The usual evaluation metrics for this task are accuracy, F1 score, and so on. Can accuracy and F1 score be used as evaluation metrics in this project? How should I do it? Thanks a lot for your help~", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5377d0bf95f2fc79b75b253e956a7945f3030ad3', 'files': [{'path': 'src/llmtuner/tuner/sft/metric.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/tuner/sft/metric.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "93809d1c3b73898a89cbdd99061eeeed5fd4f6a7", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1120", "iss_label": "solved", "title": "System prompt", "body": "I'd like to ask how the content entered in the \u201cSystem prompt (optional)\u201d box is passed to the model, and how it is concatenated with the content entered in the \u201cInput...\u201d box. Where is the corresponding code?\r\n\r\nMany thanks", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '93809d1c3b73898a89cbdd99061eeeed5fd4f6a7', 'files': [{'path': 'src/llmtuner/extras/template.py', 'Loc': {\"('Template', '_encode', 93)\": {'mod': [109]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/extras/template.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "757564caa1a0e83d184100604e43efe3c5030c0e", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/2584", "iss_label": "solved", "title": "How should llama pro be used? Can it be used for fine-tuning?", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### Reproduction\n\nHow should llama pro be used? Can it be used for PT and SFT?\n\n### Expected behavior\n\n_No response_\n\n### System Info\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '757564caa1a0e83d184100604e43efe3c5030c0e', 'files': [{'path': 'tests/llama_pro.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "e678c1ccb2583e7b3e9e5bf68b58affc1a71411c", "iss_html_url": 
"https://github.com/hiyouga/LLaMA-Factory/issues/5011", "iss_label": "solved", "title": "Compute_Accuracy", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\n![image](https://github.com/user-attachments/assets/4847743e-e25b-4136-a3f4-43a3e7335f80)\r\n\r\nI'm curious about what this metric is for, and how and when I could use it (ComputeAccuracy).\r\n\r\n![image](https://github.com/user-attachments/assets/672f14bb-c812-45fe-ad77-d3c66f660ce5)\r\nI also saw the metrics in the llama-factory paper (multiple-choice) and I wonder whether those metrics match ComputeAccuracy.\r\n\r\nCan anyone answer me?\r\n\r\nPlease tell me how I can use this metric, and give me some example commands. \r\n\r\nThank you! \n\n### Reproduction\n\n. \n\n### Expected behavior\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e678c1ccb2583e7b3e9e5bf68b58affc1a71411c', 'files': [{'path': 'examples/train_lora/llama3_lora_eval.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["examples/train_lora/llama3_lora_eval.yaml"], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4787", "iss_label": "solved", "title": "How should I modify the hyperparameters in the yaml file for full fine-tuning of BaiChuan2-7B-Chat so that it runs on three A6000 cards?", "body": "### Reminder\r\n\r\n- [X] I have read the README and searched the existing issues.\r\n\r\n### System Info\r\n\r\n- `llamafactory` version: 0.8.2.dev0\r\n- Platform: Linux-3.10.0-1160.88.1.el7.x86_64-x86_64-with-glibc2.17\r\n- Python version: 3.8.19\r\n- PyTorch version: 2.3.0+cu121 (GPU)\r\n- Transformers version: 4.41.2\r\n- Datasets version: 2.20.0\r\n- Accelerate version: 0.31.0\r\n- PEFT version: 0.11.1\r\n- TRL version: 0.9.4\r\n- GPU type: NVIDIA RTX A6000\r\n- DeepSpeed version: 0.14.0\r\n- vLLM version: 0.4.3\r\n\r\n### Reproduction\r\n```yaml\r\n### model\r\nmodel_name_or_path: /data/Baichuan2-7B-Chat\r\n\r\n### method\r\nstage: sft\r\ndo_train: true\r\nfinetuning_type: full\r\n\r\n### ddp\r\nddp_timeout: 180000000\r\ndeepspeed: examples/deepspeed/ds_z3_config.json\r\n\r\n### dataset\r\ndataset: entity\r\ntemplate: baichuan2\r\ncutoff_len: 1024\r\nmax_samples: 1000\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/baichuan2-7b/full/sft\r\nlogging_steps: 10\r\nsave_steps: 500\r\nplot_loss: true\r\noverwrite_output_dir: true\r\n\r\n### train\r\nper_device_train_batch_size: 1\r\ngradient_accumulation_steps: 2\r\nlearning_rate: 1.0e-4\r\nnum_train_epochs: 3.0\r\nlr_scheduler_type: cosine\r\nwarmup_ratio: 0.1\r\npure_bf16: true\r\n\r\n### eval\r\nval_size: 0.1\r\nper_device_eval_batch_size: 1\r\neval_strategy: steps\r\neval_steps: 500\r\n```\r\n### Expected 
behavior\r\n\r\nYour project states that a 7B model can run on 120G of GPU memory, but running on three A6000 cards I get OOM. I hope you can tell me how to modify the hyperparameters to make it run. I also referred to previous issues and set pure_bf16, but it still cannot run.\r\n\r\n### Others\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '955e01c038ccc708def77f392b0e342f2f51dc9b', 'files': [{'path': 'examples/deepspeed/ds_z3_offload_config.json', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/deepspeed/ds_z3_offload_config.json"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "955e01c038ccc708def77f392b0e342f2f51dc9b", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4803", "iss_label": "solved", "title": "predict_oom", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nmodel_name_or_path: llm/Qwen2-72B-Instruct\r\n# adapter_name_or_path: saves/qwen2_7b_errata_0705/lora_ace04_instruction_v1_savesteps_10/sft\r\n\r\n### method\r\nstage: sft\r\ndo_predict: true\r\nfinetuning_type: lora\r\n\r\n### dataset\r\ndataset: prompt_to_get_cot_normal\r\ntemplate: qwen\r\ncutoff_len: 2048\r\nmax_samples: 1000\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/qwen2_72b_errata_0712/lora/predict\r\noverwrite_output_dir: true\r\n\r\n### eval\r\nper_device_eval_batch_size: 1\r\npredict_with_generate: true\r\nddp_timeout: 180000000\n\n### Reproduction\n\nWith 8 A100 80G cards, running predict on the 72b base model with 1k samples shows OOM; all GPUs load the entire model parameters at the same time, which causes the OOM.\r\nAccording to the official docs 160G should be enough, but my 80*8 is not enough. Is this a bug, or do I need to set some parameter?\n\n### Expected behavior\n\n_No response_\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '955e01c038ccc708def77f392b0e342f2f51dc9b', 'files': [{'path': 'Examples/train_lora/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3\nuser configuration error", "loc_way": "comment", "loc_scope": "3", "info_type": "config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Examples/train_lora/"]}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "3f11ab800f7dcf4b61a7c72ead4e051db11a8091", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/4178", "iss_label": "solved", "title": "The label in generated_predictions.jsonl obtained from glm-4-9b-chat-1m do_predict contains \\n and some results that are not in the dataset.", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### System Info\n\nllamafactory 0.7.2.dev0\r\nPython 3.10.14\r\nubuntu 20.04\n\n### Reproduction\n\n$llamafactory-cli train 
glm_predict.yaml\r\n\r\ngenerated_predictions.jsonl output:\r\n{\"label\": \"\\n[S,137.0]\", \"predict\": \"\\n[S,137.0]\"}\r\n{\"label\": \"\\n\", \"predict\": \"\\n\"}\r\n{\"label\": \"\\n\", \"predict\": \"\\n\"}\r\n{\"label\": \"\\n[S,593\", \"predict\": \"\\n[S,593\"}\r\n{\"label\": \"\\n[H,593\", \"predict\": \"\\n[S,593\"}\r\n\r\nglm_predict.yaml content:\r\n### model\r\nmodel_name_or_path: ./THUDM_glm-4-9b-chat-1m\r\nadapter_name_or_path: saves/glm/lora/sft\r\n\r\n### method\r\nstage: sft\r\ndo_predict: true\r\nfinetuning_type: lora\r\n\r\n### dataset\r\ndataset: data_v0.1\r\ntemplate: glm4\r\noverwrite_cache: true\r\npreprocessing_num_workers: 16\r\n\r\n### output\r\noutput_dir: saves/glm/lora/predict\r\n\r\n### eval\r\nper_device_eval_batch_size: 4\r\npredict_with_generate: true\r\n\r\n\r\n\r\n\n\n### Expected behavior\n\nExpected output:\r\ngenerated_predictions.jsonl output:\r\n{\"label\": \"[S,137.0]\", \"predict\": \"[S,137.0]\"}\r\n{\"label\": \"[S,593]\", \"predict\": \"[S,593]\"}\r\n{\"label\": \"[H,593]\", \"predict\": \"[S,593]\"}\r\n\r\n\r\nResults containing only \"\\n\" never appear in the dataset.\n\n### Others\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3f11ab800f7dcf4b61a7c72ead4e051db11a8091', 'files': [{'path': 'src/llamafactory/data/template.py', 'Loc': {'(None, None, None)': {'mod': [663, 664]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llamafactory/data/template.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/230", "iss_label": "solved", "title": "After training baichuan-13b with this project, how do I load the trained model in baichuan-13b?", "body": "After training is complete, how should I modify the baichuan-13b project to load the trained model?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd46c136c0e104c50999df18a88c42658b819f71f', 'files': [{'path': 'src/export_model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/export_model.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "024b0b1ab28d3c3816f319370ed79a4f26d40edf", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/1995", "iss_label": "solved", "title": "Running RM LoRA on Phi-1.5 gives 'NoneType' object is not subscriptable", "body": "### Reminder\n\n- [X] I have read the README and searched the existing issues.\n\n### Reproduction\n\n\r\n\r\nsh script:\r\n```\r\ndeepspeed --num_gpus 8 --master_port=9901 src/train_bash.py \\\r\n --stage rm \\\r\n --model_name_or_path Phi-1.5 \\\r\n --deepspeed ds_config.json \\ \r\n --adapter_name_or_path sft_lora \\ \r\n --create_new_adapter \\\r\n --do_train \\ \r\n --dataset comparision_gpt4_en \\\r\n 
--template default \\\r\n --finetuning_type lora \\\r\n --lora_target Wqkv \\ \r\n --overwrite_ouput_dir \\ \r\n --output_dir rm_lora \\ \r\n --per_device_train_batch_size 2 \\\r\n --gradient_accumulation_steps 4 \\\r\n --lr_scheduler_type cosine \\ \r\n --logging_steps 1 \\\r\n --save_steps 200 \\ \r\n --learning_rate 1e-6 \\ \r\n --num_train_epochs 1.0 \\ \r\n --max_steps 200 \\\r\n --fp16 > rm.log 2>&1 &\r\nwait\r\n \r\n```\n\n### Expected behavior\n\nExpected result: the weights are read successfully and training starts\n\n### System Info\n\nDevice: NPU\r\nPackage versions:\r\n```\r\ntransformers==4.36.1\r\ndeepspeed==0.12.4\r\npeft==0.7.1\r\ntrl==0.7.4\r\ntorch==2.1.0\r\naccelerate==0.25.0\r\n```\n\n### Others\n\nError message:\r\nTraceback\r\n File \"src/train_bash.py\" line 14\r\n main()\r\nFile \"src/train_bash.py\" line 5\r\n run_exp()\r\nFile \"LLaMA-Factory/src/llmtuner/train/tuner.py\", line 28 in run_exp\r\n run_rm(model_args, data_args, training_args, finetuning_args, callbacks)\r\nFile \"LLaMA-Factory/src/llmtuner/train/rm/workflow.py\", line 50, in run_rm\r\n train_result = trainer.train(resume_from_checkpoint=training_args.resume_from_checkpoint)\r\nFile \"transformers/trainer.py\" line 2728\r\n loss = self.compute_loss(model, inputs)\r\nFile \"LLaMA-Factory/src/llmtuner/train/rm/trainer.py\" in line 41, in compute_loss\r\n _, _, values = model(**inputs, output_hidden_states=True, return_dict=True)\r\nFile \".../trl/models/modeling_value_head.py\", in line 175. in forward\r\n last_hidden_state = base_model_output.hidden_state[-1]\r\nTypeError: 'NoneType' object is not subscriptable\r\n\r\nAt first I suspected a problem with the weights, but the error persisted after re-downloading them, and switching Phi-1.5 to Phi-2 gives the same error\r\n\r\n \r\n \r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "docs", "pro": "transformers", "path": ["model_doc/phi"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["model_doc/phi"], "test": [], "config": [], "asset": []}} +{"organization": "hiyouga", "repo_name": "LLaMA-Factory", "base_commit": "d46c136c0e104c50999df18a88c42658b819f71f", "iss_html_url": "https://github.com/hiyouga/LLaMA-Factory/issues/226", "iss_label": "solved", "title": "How does this project process multi-turn dialogue corpora?", "body": "Is the input formed by concatenating multiple history turns to predict the answer of the final turn? Or is the history split into training samples of multiple turns, e.g. a 5-turn dialogue split into 1-, 2-, 3-, 4- and 5-turn samples? Could the author point me to the code for the concrete processing? I'd like to learn from it. Thanks.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd46c136c0e104c50999df18a88c42658b819f71f', 'files': [{'path': 'src/llmtuner/dsets/preprocess.py', 'Loc': {\"(None, 'preprocess_supervised_dataset', 50)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], 
"ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/llmtuner/dsets/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "2f3aab9cfdc139f399387dbb90300d5a8bf8d2f1", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/375", "iss_label": "bug", "title": "ValueError: Requested tokens exceed context window of 1000", "body": "After I ingest a file, run privateGPT and try to ask anything, I get following error:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Stable_Diffusion\\privateGPT\\privateGPT.py\", line 75, in <module>\r\n main()\r\n File \"C:\\Stable_Diffusion\\privateGPT\\privateGPT.py\", line 47, in main\r\n res = qa(query)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\retrieval_qa\\base.py\", line 120, in _call\r\n answer = self.combine_documents_chain.run(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 239, in run\r\n return self(kwargs, callbacks=callbacks)[self.output_keys[0]]\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\combine_documents\\base.py\", line 84, in _call\r\n output, extra_return_dict = self.combine_docs(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\combine_documents\\stuff.py\", line 87, in combine_docs\r\n return self.llm_chain.predict(callbacks=callbacks, **inputs), {}\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 213, in predict\r\n return self(kwargs, callbacks=callbacks)[self.output_key]\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 140, in __call__\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\base.py\", line 134, in __call__\r\n self._call(inputs, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 69, in _call\r\n response = self.generate([inputs], run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\chains\\llm.py\", line 79, in generate\r\n return self.llm.generate_prompt(\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 134, in generate_prompt\r\n return 
self.generate(prompt_strings, stop=stop, callbacks=callbacks)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 191, in generate\r\n raise e\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 185, in generate\r\n self._generate(prompts, stop=stop, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\base.py\", line 405, in _generate\r\n self._call(prompt, stop=stop, run_manager=run_manager)\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\llamacpp.py\", line 225, in _call\r\n for token in self.stream(prompt=prompt, stop=stop, run_manager=run_manager):\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\langchain\\llms\\llamacpp.py\", line 274, in stream\r\n for chunk in result:\r\n File \"C:\\Users\\olegt\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\llama_cpp\\llama.py\", line 618, in _create_completion\r\n raise ValueError(\r\nValueError: Requested tokens exceed context window of 1000\r\n```\r\n\r\nI tried it with docx and pdf; the models used are ggml-vic13b-q5_1.bin and stable-vicuna-13B.ggml.q4_0.bin.\r\nDuring ingestion or loading privateGPT I get no error.\r\n\r\nOS: Windows 10\r\nCPU: Ryzen 7 3700\r\nRAM: 32gb\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2f3aab9cfdc139f399387dbb90300d5a8bf8d2f1', 'files': [{'path': 'ingest.py', 'Loc': {\"(None, 'process_documents', 114)\": {'mod': [124]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nor\n1", "info_type": "Code"}, "loctype": {"code": ["ingest.py"], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "24cfddd60f74aadd2dade4c63f6012a2489938a1", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1125", "iss_label": "", "title": "LLM mock does not give any output", "body": "I have downloaded the LLM models \r\n\r\nand used this: \r\npoetry install --with local\r\npoetry run python scripts/setup\r\n\r\nStill I get this output \r\n![Screenshot from 2023-10-27 23-50-34](https://github.com/imartinez/privateGPT/assets/148402457/201b18f9-c269-40e4-99c5-a22fd3b9366d)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '24cfddd60f74aadd2dade4c63f6012a2489938a1', 'files': [{'path': 'settings.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["settings.yaml"], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "dd1100202881a01b6b013b7bc1faad8b5c63fec9", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/850", "iss_label": "primordial", "title": "Chinese questions in privateGPT show tokens exceeding the limit; English questions do not have this problem", "body": 
"\r\nThe way tokens are counted is strange: a five-character instruction has more tokens than a seven-character one\r\n![\u5fae\u4fe1\u56fe\u7247_20230713094810](https://github.com/imartinez/privateGPT/assets/139415035/6346ae1f-9c65-4721-b7dd-a176fc9be4e1)\r\n![\u5fae\u4fe1\u56fe\u7247_20230713094822](https://github.com/imartinez/privateGPT/assets/139415035/60f2d272-8a80-48d7-9032-4d915a83aa7d)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "4", "loc_way": "comment", "loc_scope": "1", "info_type": "config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "d17c34e81a84518086b93605b15032e2482377f7", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1724", "iss_label": "", "title": "Error in Model Download and Tokenizer Fetching During Setup Script Execution", "body": "### Environment\r\nOperating System: MacBook Pro M1\r\nPython Version: 3.11\r\n\r\nDescription\r\nI'm encountering an issue when running the setup script for my project. The script is supposed to download an embedding model and an LLM model from Hugging Face, followed by their respective tokenizers. While the script successfully downloads the embedding and LLM models, it fails when attempting to download the tokenizer with a 404 Client Error.\r\n\r\n### Steps to Reproduce\r\nRun `poetry run python scripts/setup`\r\nEmbedding model (BAAI/bge-small-en-v1.5) and the LLM model (mistral-7b-instruct-v0.2.Q4_K_M.gguf) are downloaded successfully.\r\nThe script then attempts to download a tokenizer and fails.\r\n\r\n### Actual Behavior\r\nThe script throws an error when trying to download the tokenizer. The error message indicates a 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json. 
This suggests that either the tokenizer's name is not being correctly passed (as indicated by the 'None' in the URL) or there's an issue with the tokenizer's availability on Hugging Face.\r\n\r\n### Logs\r\n```bash\r\n22:02:47.207 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default']\r\nDownloading embedding BAAI/bge-small-en-v1.5\r\nFetching 14 files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14/14 [00:00<00:00, 14.81it/s]\r\nEmbedding model downloaded!\r\nDownloading LLM mistral-7b-instruct-v0.2.Q4_K_M.gguf\r\nLLM model downloaded!\r\nDownloading tokenizer None\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 270, in hf_raise_for_status\r\n response.raise_for_status()\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/requests/models.py\", line 1021, in raise_for_status\r\n raise HTTPError(http_error_msg, response=self)\r\nrequests.exceptions.HTTPError: 404 Client Error: Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 398, in cached_file\r\n resolved_file = hf_hub_download(\r\n ^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1374, in hf_hub_download\r\n raise head_call_error\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1247, in hf_hub_download\r\n metadata = get_hf_file_metadata(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py\", line 118, in _inner_fn\r\n return fn(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 1624, in get_hf_file_metadata\r\n r = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 402, in _request_wrapper\r\n response = _request_wrapper(\r\n ^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/file_download.py\", line 426, in _request_wrapper\r\n hf_raise_for_status(response)\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/huggingface_hub/utils/_errors.py\", line 320, in 
hf_raise_for_status\r\n raise RepositoryNotFoundError(message, response) from e\r\nhuggingface_hub.utils._errors.RepositoryNotFoundError: 404 Client Error. (Request ID: Root=1-65f21479-09e39977255bdb72502d4b8c;66371627-2d02-44c6-8f25-d115820c1986)\r\n\r\nRepository Not Found for url: https://huggingface.co/None/resolve/main/tokenizer_config.json.\r\nPlease make sure you specified the correct `repo_id` and `repo_type`.\r\nIf you are trying to access a private or gated repo, make sure you are authenticated.\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/Users/User/Projects/Tests/privateGPT/scripts/setup\", line 43, in <module>\r\n AutoTokenizer.from_pretrained(\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 767, in from_pretrained\r\n tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 600, in get_tokenizer_config\r\n resolved_config_file = cached_file(\r\n ^^^^^^^^^^^^\r\n File \"/Users/User/Projects/Tests/privateGPT/.venv/lib/python3.11/site-packages/transformers/utils/hub.py\", line 421, in cached_file\r\n raise EnvironmentError(\r\nOSError: None is not a local folder and is not a valid model identifier listed on 'https://huggingface.co/models'\r\nIf this is a private repository, make sure to pass a token having permission to this repo either by logging in with `huggingface-cli login` or by passing `token=<your_token>`\r\n```\r\n### Additional Information\r\nIt seems like the script is not correctly fetching the name or identifier for the tokenizer.\r\nThe issue might be related to how the tokenizer's name is being resolved or passed in the script (None).\r\nI also tried with docker compose, yielding same results. 
Maybe it is just some setting that I am missing from the docs?\r\n\r\nThank you\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd17c34e81a84518086b93605b15032e2482377f7', 'files': [{'path': 'settings.yaml', 'Loc': {'(None, None, 42)': {'mod': [42]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["settings.yaml"], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "026b9f895cfb727da523a20c59773146801236ba", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/13", "iss_label": "", "title": "gpt_tokenize: unknown token '?'", "body": "gpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\ngpt_tokenize: unknown token '?'\r\n[1] 32658 killed python3 privateGPT.py", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3\n\uff1f\uff1f\uff1f", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "d3acd85fe34030f8cfd7daf50b30c534087bdf2b", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1514", "iss_label": "", "title": "LLM Chat only returns \"#\" characters", "body": "No matter the prompt, privateGPT only returns hashes as the response. This doesn't occur when not using CUBLAS. 
\r\n\r\n<img width=\"745\" alt=\"image\" src=\"https://github.com/imartinez/privateGPT/assets/6668593/b4ef137f-0122-44fe-864a-eef246066ec3\">\r\n\r\nSet up info:\r\n\r\nNVIDIA GeForce RTX 4080\r\nWindows 11\r\n\r\n<img width=\"924\" alt=\"image\" src=\"https://github.com/imartinez/privateGPT/assets/6668593/35c2233d-ae28-40c9-a018-d9590f85908d\">\r\n\r\n<img width=\"1141\" alt=\"image\" src=\"https://github.com/imartinez/privateGPT/assets/6668593/750e4006-cd97-4b4f-848d-5598f09697f3\">\r\n\r\n\r\n\r\naccelerate==0.25.0\r\naiofiles==23.2.1\r\naiohttp==3.9.1\r\naiosignal==1.3.1\r\naiostream==0.5.2\r\naltair==5.2.0\r\nannotated-types==0.6.0\r\nanyio==3.7.1\r\nattrs==23.1.0\r\nbeautifulsoup4==4.12.2\r\nblack==22.12.0\r\nboto3==1.34.2\r\nbotocore==1.34.2\r\nbuild==1.0.3\r\nCacheControl==0.13.1\r\ncertifi==2023.11.17\r\ncfgv==3.4.0\r\ncharset-normalizer==3.3.2\r\ncleo==2.1.0\r\nclick==8.1.7\r\ncolorama==0.4.6\r\ncoloredlogs==15.0.1\r\ncontourpy==1.2.0\r\ncoverage==7.3.3\r\ncrashtest==0.4.1\r\ncycler==0.12.1\r\ndataclasses-json==0.5.14\r\ndatasets==2.14.4\r\nDeprecated==1.2.14\r\ndill==0.3.7\r\ndiskcache==5.6.3\r\ndistlib==0.3.8\r\ndistro==1.8.0\r\ndnspython==2.4.2\r\ndulwich==0.21.7\r\nemail-validator==2.1.0.post1\r\nevaluate==0.4.1\r\nfastapi==0.103.2\r\nfastjsonschema==2.19.1\r\nffmpy==0.3.1\r\nfilelock==3.13.1\r\nflatbuffers==23.5.26\r\nfonttools==4.46.0\r\nfrozenlist==1.4.1\r\nfsspec==2023.12.2\r\ngradio==4.10.0\r\ngradio_client==0.7.3\r\ngreenlet==3.0.2\r\ngrpcio==1.60.0\r\ngrpcio-tools==1.60.0\r\nh11==0.14.0\r\nh2==4.1.0\r\nhpack==4.0.0\r\nhttpcore==1.0.2\r\nhttptools==0.6.1\r\nhttpx==0.25.2\r\nhuggingface-hub==0.19.4\r\nhumanfriendly==10.0\r\nhyperframe==6.0.1\r\nidentify==2.5.33\r\nidna==3.6\r\nimportlib-resources==6.1.1\r\niniconfig==2.0.0\r\ninjector==0.21.0\r\ninstaller==0.7.0\r\nitsdangerous==2.1.2\r\njaraco.classes==3.3.0\r\nJinja2==3.1.2\r\njmespath==1.0.1\r\njoblib==1.3.2\r\njsonschema==4.20.0\r\njsonschema-specifications==2023.11.2\r\nkeyring==24.3.0\r\nkiwisolver==1.4.5\r\nllama-index==0.9.3\r\nllama_cpp_python==0.2.29\r\nmarkdown-it-py==3.0.0\r\nMarkupSafe==2.1.3\r\nmarshmallow==3.20.1\r\nmatplotlib==3.8.2\r\nmdurl==0.1.2\r\nmore-itertools==10.2.0\r\nmpmath==1.3.0\r\nmsgpack==1.0.7\r\nmultidict==6.0.4\r\nmultiprocess==0.70.15\r\nmypy==1.7.1\r\nmypy-extensions==1.0.0\r\nnest-asyncio==1.5.8\r\nnetworkx==3.2.1\r\nnltk==3.8.1\r\nnodeenv==1.8.0\r\nnumpy==1.26.3\r\nonnx==1.15.0\r\nonnxruntime==1.16.3\r\nopenai==1.5.0\r\noptimum==1.16.1\r\norjson==3.9.10\r\npackaging==23.2\r\npandas==2.1.4\r\npathspec==0.12.1\r\npexpect==4.9.0\r\nPillow==10.1.0\r\npkginfo==1.9.6\r\nplatformdirs==4.1.0\r\npluggy==1.3.0\r\npoetry==1.7.1\r\npoetry-core==1.8.1\r\npoetry-plugin-export==1.6.0\r\nportalocker==2.8.2\r\npre-commit==2.21.0\r\n-e 
git+https://github.com/imartinez/privateGPT@d3acd85fe34030f8cfd7daf50b30c534087bdf2b#egg=private_gpt\r\nprotobuf==4.25.1\r\npsutil==5.9.6\r\nptyprocess==0.7.0\r\npyarrow==14.0.1\r\npydantic==2.5.2\r\npydantic-extra-types==2.2.0\r\npydantic-settings==2.1.0\r\npydantic_core==2.14.5\r\npydub==0.25.1\r\nPygments==2.17.2\r\npyparsing==3.1.1\r\npypdf==3.17.2\r\npyproject_hooks==1.0.0\r\npyreadline3==3.4.1\r\npytest==7.4.3\r\npytest-asyncio==0.21.1\r\npytest-cov==3.0.0\r\npython-dateutil==2.8.2\r\npython-dotenv==1.0.0\r\npython-multipart==0.0.6\r\npytz==2023.3.post1\r\npywin32==306\r\npywin32-ctypes==0.2.2\r\nPyYAML==6.0.1\r\nqdrant-client==1.7.0\r\nrapidfuzz==3.6.1\r\nreferencing==0.32.0\r\nregex==2023.10.3\r\nrequests==2.31.0\r\nrequests-toolbelt==1.0.0\r\nresponses==0.18.0\r\nrich==13.7.0\r\nrpds-py==0.14.1\r\nruff==0.1.8\r\ns3transfer==0.9.0\r\nsafetensors==0.4.1\r\nscikit-learn==1.3.2\r\nscipy==1.11.4\r\nsemantic-version==2.10.0\r\nsentence-transformers==2.2.2\r\nsentencepiece==0.1.99\r\nshellingham==1.5.4\r\nsix==1.16.0\r\nsniffio==1.3.0\r\nsoupsieve==2.5\r\nSQLAlchemy==2.0.23\r\nstarlette==0.27.0\r\nsympy==1.12\r\ntenacity==8.2.3\r\nthreadpoolctl==3.2.0\r\ntiktoken==0.5.2\r\ntokenizers==0.15.0\r\ntomlkit==0.12.0\r\ntoolz==0.12.0\r\ntorch==2.1.2+cu121\r\ntorchaudio==2.1.2+cu121\r\ntorchvision==0.16.2+cu121\r\ntqdm==4.66.1\r\ntransformers==4.36.1\r\ntrove-classifiers==2024.1.8\r\ntyper==0.9.0\r\ntypes-PyYAML==6.0.12.12\r\ntyping-inspect==0.9.0\r\ntyping_extensions==4.9.0\r\ntzdata==2023.3\r\nujson==5.9.0\r\nurllib3==1.26.18\r\nuvicorn==0.24.0.post1\r\nvirtualenv==20.25.0\r\nwatchdog==3.0.0\r\nwatchfiles==0.21.0\r\nwebsockets==11.0.3\r\nwrapt==1.16.0\r\nxxhash==3.4.1\r\nyarl==1.9.4", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd3acd85fe34030f8cfd7daf50b30c534087bdf2b', 'files': [{'path': 'private_gpt/components/llm/llm_component.py', 'Loc': {\"('LLMComponent', '__init__', 21)\": {'mod': [45]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["private_gpt/components/llm/llm_component.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "c4b247d696c727c1da6d993ce4f6c3a557e91b42", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/685", "iss_label": "enhancement\nprimordial", "title": "CPU utilization", "body": "CPU utilization appears to be capped at 20%\r\nIs there a way to increase CPU utilization and thereby enhance performance?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c4b247d696c727c1da6d993ce4f6c3a557e91b42', 'files': [{'path': 'privateGPT.py', 'Loc': {\"(None, 'main', 23)\": {'mod': [36]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "7ae80e662936bd946a231d1327bde476556c5d61", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/181", "iss_label": "primordial", "title": "Segfault : not enough space in the context's memory pool", "body": "ggml_new_tensor_impl: not enough space in the context's memory pool (needed 
3779301744, available 3745676000)\r\nzsh: segmentation fault python3.11 privateGPT.py\r\n\r\nWhat's the context memory pool? Can I configure it? I actually have a lot of excess memory", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7ae80e662936bd946a231d1327bde476556c5d61', 'files': [{'path': 'ingest.py', 'Loc': {\"(None, 'main', 37)\": {'mod': [47]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ingest.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "9d47d03d183685c675070d47ad3beb67446d6580", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/630", "iss_label": "bug\nprimordial", "title": "Use falcon model in privategpt", "body": "Hi, how can I use the Falcon model in privateGPT?\r\n\r\nhttps://huggingface.co/tiiuae/falcon-40b-instruct\r\n\r\nThanks", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9d47d03d183685c675070d47ad3beb67446d6580', 'files': [{'path': 'privateGPT.py', 'Loc': {\"(None, 'main', 23)\": {'mod': [32]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["privateGPT.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "380b119581d2afcd24948f1108507b138490aec6", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/235", "iss_label": "bug\nprimordial", "title": "Need help with some errors", "body": " File \"F:\\privateGPT\\Lib\\site-packages\\langchain\\embeddings\\llamacpp.py\", line 79, in validate_environment\r\n values[\"client\"] = Llama(\r\n ^^^^^^\r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama.py\", line 155, in __init__ \r\n self.ctx = llama_cpp.llama_init_from_file(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n \r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama_cpp.py\", line 182, in llama_init_from_file\r\n return _lib.llama_init_from_file(path_model, params)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nOSError: [WinError -1073741795] Windows Error 0xc000001d\r\nDuring handling of the above exception, another exception occurred:\r\n\r\n File \"F:\\privateGPT\\ingest.py\", line 62, in <module>\r\n main()\r\n File \"F:\\privateGPT\\ingest.py\", line 53, in main\r\n llama = LlamaCppEmbeddings(model_path=llama_embeddings_model, n_ctx=model_n_ctx) \r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ \r\n File \"pydantic\\main.py\", line 339, in pydantic.main.BaseModel.__init__\r\n File \"pydantic\\main.py\", line 1102, in pydantic.main.validate_model\r\n File \"F:\\privateGPT\\Lib\\site-packages\\langchain\\embeddings\\llamacpp.py\", line 99, in validate_environment\r\n raise NameError(f\"Could not load Llama model from path: {model_path}\")\r\nNameError: Could not load Llama model from path: F:/privateGPT/models/ggml-model-q4_0.bin \r\nException ignored in: <function Llama.__del__ at 0x000002307F085E40>\r\nTraceback (most recent call last):\r\n File \"F:\\privateGPT\\Lib\\site-packages\\llama_cpp\\llama.py\", line 978, in __del__\r\n if self.ctx is not None:\r\n ^^^^\r\nAttributeError: 'Llama' object has no attribute 'ctx'\r\n", "code": null, "pr_html_url": 
null, "commit_html_url": null, "file_loc": "{'base_commit': '380b119581d2afcd24948f1108507b138490aec6', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "b1057afdf8f65fdb10e4160adbd8462be0c08271", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/796", "iss_label": "primordial", "title": "Unable to instantiate model (type=value_error)", "body": "Installed on Ubuntu 20.04 with Python3.11-venv\r\n\r\nError on line 38:\r\nhttps://github.com/imartinez/privateGPT/blob/b1057afdf8f65fdb10e4160adbd8462be0c08271/privateGPT.py#L38C7-L38C7\r\n\r\nError:\r\n\r\nUsing embedded DuckDB with persistence: data will be stored in: db\r\nFound model file at models/ggml-gpt4all-j-v1.3-groovy.bin\r\nInvalid model file\r\nTraceback (most recent call last):\r\n File \"/home/kk/Documents/privateGPT/privateGPT.py\", line 83, in <module>\r\n main()\r\n File \"/home/kk/Documents/privateGPT/privateGPT.py\", line 38, in main\r\n llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"pydantic/main.py\", line 341, in pydantic.main.BaseModel.__init__\r\npydantic.error_wrappers.ValidationError: 1 validation error for GPT4All\r\n__root__\r\n Unable to instantiate model (type=value_error)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ggml-gpt4all-j-v1.3-groovy.bin"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ggml-gpt4all-j-v1.3-groovy.bin"]}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "dd1100202881a01b6b013b7bc1faad8b5c63fec9", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/839", "iss_label": "bug\nprimordial", "title": "ERROR: The prompt size exceeds the context window size and cannot be processed.", "body": "Enter a query\uff0c\r\nIt show:\r\n\r\nERROR: The prompt size exceeds the context window size and cannot be processed.GPT-J ERROR: The prompt is2614tokens and the context window is2048!\r\n\r\nERROR: The prompt size exceeds the context window size and cannot be processed.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "zylon-ai", "repo_name": "private-gpt", "base_commit": "6bbec79583b7f28d9bea4b39c099ebef149db843", "iss_html_url": "https://github.com/zylon-ai/private-gpt/issues/1598", "iss_label": "", "title": "Performance bottleneck using GPU ", "body": "Hi Guys, \r\n\r\nI am running the default Mistral model, and when running queries I am seeing 100% CPU usage (so single core), and up to 29% GPU usage which drops to have 15% mid answer. \r\n\r\nI am using a MacBook Pro with M3 Max. 
I have set: model_kwargs={\"n_gpu_layers\": -1, \"offload_kqv\": True},\r\n\r\nI am curious as LM studio runs the same model with low CPU usage and 80%+ GPU", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6bbec79583b7f28d9bea4b39c099ebef149db843', 'files': [{'path': 'private_gpt/ui/ui.py', 'Loc': {\"('PrivateGptUi', 'yield_deltas', 81)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["private_gpt/ui/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "87ebab0615b1bf9b14b478b055e7059d630b4833", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/6007", "iss_label": "question", "title": "How to limit YouTube Music search to tracks only?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\n- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nIs there a way to return only tracks in a YTmusic search? 
Sometimes music videos have sound effects, while I'm only interested in the original song.\r\n\r\nI'm using this command:\r\n`yt-dlp -f bestaudio --playlist-items 1 --default-search \"https://music.youtube.com/search?q=\" -a list-of-tracks.txt`\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '87ebab0615b1bf9b14b478b055e7059d630b4833', 'files': [{'path': 'yt_dlp/extractor/youtube.py', 'Loc': {\"('YoutubeMusicSearchURLIE', None, 6647)\": {'mod': [6676]}}, 'status': 'modified'}, {'path': 'yt_dlp/extractor/youtube.py', 'Loc': {\"('YoutubeMusicSearchURLIE', None, 6647)\": {'mod': [6659]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "91302ed349f34dc26cc1d661bb45a4b71f4417f7", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/7436", "iss_label": "question", "title": "Is YT-DLP capacity of downloading/displaying Automatic caption?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\n**Similar Issue:**\r\n- #5733 \r\n\r\n--- \r\nI may have missed one or 2 that could answer my question. These discussions and answers are not clear to my understanding, so here i am \r\n\r\n**Details**\r\nThere are many videos that have \"auto-generated subtitles | automatic captions\" and no non-generated subtitles. I've ran `yt-dlp --list-subs URL` and discover that it said `URL has no subtitles`. \r\n\r\n**QUESTION:**\r\n1. Is it possible for yt-dlp to display the automatic caption while I am streaming the video to MPV? \r\n2. Does yt-dlp preferred \"non auto-generated caption\"? \r\n\r\nI'm not sure if this is intentional or not due to one discussion via issues that a guy mentioned that yt-dlp preferred non-autogenerated subtitles. 
\r\n\r\n**Command for using MPV with yt-dlp**\r\nthe command was `mpv \"https://youtu.be/i6kccBc-FBQ\" --ytdl-raw-options=write-auto-subs=,write-subs=,sub-lang=en`\r\n\r\nEDIT: added the double quote to the URL in the command line\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\n[debug] Command-line config: ['-vU', '--list-subs', 'https://youtu.be/i6kccBc-FBQ']\r\n[debug] Portable config \"C:\\Program Scoop\\apps\\yt-dlp\\current\\yt-dlp.conf\": []\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2023.06.22 [812cdfa06] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1851 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nAvailable version: stable@2023.06.22, Current version: stable@2023.06.22\r\nCurrent Build Hash: 37e7ffe204309357cfd1388b0e2c782a30e293ddd0f2761a9a8f6afa185b3566\r\nyt-dlp is up to date (stable@2023.06.22)\r\n[youtube] Extracting URL: https://youtu.be/i6kccBc-FBQ\r\n[youtube] i6kccBc-FBQ: Downloading webpage\r\n[youtube] i6kccBc-FBQ: Downloading ios player API JSON\r\n[youtube] i6kccBc-FBQ: Downloading android player API JSON\r\n[debug] Loading youtube-nsig.b7910ca8 from cache\r\n[debug] [youtube] Decrypted nsig ftRL4j1AuTut8ZV => WMPfJf_eWd71gQ\r\n[youtube] i6kccBc-FBQ: Downloading m3u8 information\r\n[debug] Sort order given by extractor: quality, res, fps, hdr:12, source, vcodec:vp9.2, channels, acodec, lang, proto\r\n[debug] Formats sorted by: hasvid, ie_pref, quality, res, fps, hdr:12(7), source, vcodec:vp9.2(10), channels, acodec, lang, proto, size, br, asr, vext, aext, hasaud, id\r\n[info] Available automatic captions for i6kccBc-FBQ:\r\nLanguage Name Formats\r\naf Afrikaans vtt, ttml, srv3, srv2, srv1, json3\r\nak Akan vtt, ttml, srv3, srv2, srv1, json3\r\nsq Albanian vtt, ttml, srv3, srv2, srv1, json3\r\nam Amharic vtt, ttml, srv3, srv2, srv1, json3\r\nar Arabic vtt, ttml, srv3, srv2, srv1, json3\r\nhy Armenian vtt, ttml, srv3, srv2, srv1, json3\r\nas Assamese vtt, ttml, srv3, srv2, srv1, json3\r\nay Aymara vtt, ttml, srv3, srv2, srv1, json3\r\naz Azerbaijani vtt, ttml, srv3, srv2, srv1, json3\r\nbn Bangla vtt, ttml, srv3, srv2, srv1, json3\r\neu Basque vtt, ttml, srv3, srv2, srv1, json3\r\nbe Belarusian vtt, ttml, srv3, srv2, srv1, json3\r\nbho Bhojpuri vtt, ttml, srv3, srv2, srv1, json3\r\nbs Bosnian vtt, ttml, srv3, srv2, srv1, json3\r\nbg Bulgarian vtt, ttml, srv3, srv2, srv1, json3\r\nmy Burmese vtt, ttml, srv3, srv2, srv1, json3\r\nca Catalan vtt, ttml, srv3, srv2, srv1, json3\r\nceb Cebuano vtt, ttml, srv3, srv2, srv1, json3\r\nzh-Hans Chinese (Simplified) vtt, ttml, srv3, srv2, srv1, json3\r\nzh-Hant Chinese (Traditional) vtt, ttml, srv3, srv2, srv1, json3\r\nco Corsican vtt, ttml, srv3, srv2, srv1, json3\r\nhr Croatian vtt, ttml, srv3, srv2, srv1, json3\r\ncs Czech vtt, ttml, srv3, srv2, srv1, json3\r\nda Danish 
vtt, ttml, srv3, srv2, srv1, json3\r\ndv Divehi vtt, ttml, srv3, srv2, srv1, json3\r\nnl Dutch vtt, ttml, srv3, srv2, srv1, json3\r\nen-orig English (Original) vtt, ttml, srv3, srv2, srv1, json3\r\nen English vtt, ttml, srv3, srv2, srv1, json3\r\neo Esperanto vtt, ttml, srv3, srv2, srv1, json3\r\net Estonian vtt, ttml, srv3, srv2, srv1, json3\r\nee Ewe vtt, ttml, srv3, srv2, srv1, json3\r\nfil Filipino vtt, ttml, srv3, srv2, srv1, json3\r\nfi Finnish vtt, ttml, srv3, srv2, srv1, json3\r\nfr French vtt, ttml, srv3, srv2, srv1, json3\r\ngl Galician vtt, ttml, srv3, srv2, srv1, json3\r\nlg Ganda vtt, ttml, srv3, srv2, srv1, json3\r\nka Georgian vtt, ttml, srv3, srv2, srv1, json3\r\nde German vtt, ttml, srv3, srv2, srv1, json3\r\nel Greek vtt, ttml, srv3, srv2, srv1, json3\r\ngn Guarani vtt, ttml, srv3, srv2, srv1, json3\r\ngu Gujarati vtt, ttml, srv3, srv2, srv1, json3\r\nht Haitian Creole vtt, ttml, srv3, srv2, srv1, json3\r\nha Hausa vtt, ttml, srv3, srv2, srv1, json3\r\nhaw Hawaiian vtt, ttml, srv3, srv2, srv1, json3\r\niw Hebrew vtt, ttml, srv3, srv2, srv1, json3\r\nhi Hindi vtt, ttml, srv3, srv2, srv1, json3\r\nhmn Hmong vtt, ttml, srv3, srv2, srv1, json3\r\nhu Hungarian vtt, ttml, srv3, srv2, srv1, json3\r\nis Icelandic vtt, ttml, srv3, srv2, srv1, json3\r\nig Igbo vtt, ttml, srv3, srv2, srv1, json3\r\nid Indonesian vtt, ttml, srv3, srv2, srv1, json3\r\nga Irish vtt, ttml, srv3, srv2, srv1, json3\r\nit Italian vtt, ttml, srv3, srv2, srv1, json3\r\nja Japanese vtt, ttml, srv3, srv2, srv1, json3\r\njv Javanese vtt, ttml, srv3, srv2, srv1, json3\r\nkn Kannada vtt, ttml, srv3, srv2, srv1, json3\r\nkk Kazakh vtt, ttml, srv3, srv2, srv1, json3\r\nkm Khmer vtt, ttml, srv3, srv2, srv1, json3\r\nrw Kinyarwanda vtt, ttml, srv3, srv2, srv1, json3\r\nko Korean vtt, ttml, srv3, srv2, srv1, json3\r\nkri Krio vtt, ttml, srv3, srv2, srv1, json3\r\nku Kurdish vtt, ttml, srv3, srv2, srv1, json3\r\nky Kyrgyz vtt, ttml, srv3, srv2, srv1, json3\r\nlo Lao vtt, ttml, srv3, srv2, srv1, json3\r\nla Latin vtt, ttml, srv3, srv2, srv1, json3\r\nlv Latvian vtt, ttml, srv3, srv2, srv1, json3\r\nln Lingala vtt, ttml, srv3, srv2, srv1, json3\r\nlt Lithuanian vtt, ttml, srv3, srv2, srv1, json3\r\nlb Luxembourgish vtt, ttml, srv3, srv2, srv1, json3\r\nmk Macedonian vtt, ttml, srv3, srv2, srv1, json3\r\nmg Malagasy vtt, ttml, srv3, srv2, srv1, json3\r\nms Malay vtt, ttml, srv3, srv2, srv1, json3\r\nml Malayalam vtt, ttml, srv3, srv2, srv1, json3\r\nmt Maltese vtt, ttml, srv3, srv2, srv1, json3\r\nmi M\u0101ori vtt, ttml, srv3, srv2, srv1, json3\r\nmr Marathi vtt, ttml, srv3, srv2, srv1, json3\r\nmn Mongolian vtt, ttml, srv3, srv2, srv1, json3\r\nne Nepali vtt, ttml, srv3, srv2, srv1, json3\r\nnso Northern Sotho vtt, ttml, srv3, srv2, srv1, json3\r\nno Norwegian vtt, ttml, srv3, srv2, srv1, json3\r\nny Nyanja vtt, ttml, srv3, srv2, srv1, json3\r\nor Odia vtt, ttml, srv3, srv2, srv1, json3\r\nom Oromo vtt, ttml, srv3, srv2, srv1, json3\r\nps Pashto vtt, ttml, srv3, srv2, srv1, json3\r\nfa Persian vtt, ttml, srv3, srv2, srv1, json3\r\npl Polish vtt, ttml, srv3, srv2, srv1, json3\r\npt Portuguese vtt, ttml, srv3, srv2, srv1, json3\r\npa Punjabi vtt, ttml, srv3, srv2, srv1, json3\r\nqu Quechua vtt, ttml, srv3, srv2, srv1, json3\r\nro Romanian vtt, ttml, srv3, srv2, srv1, json3\r\nru Russian vtt, ttml, srv3, srv2, srv1, json3\r\nsm Samoan vtt, ttml, srv3, srv2, srv1, json3\r\nsa Sanskrit vtt, ttml, srv3, srv2, srv1, json3\r\ngd Scottish Gaelic vtt, ttml, srv3, srv2, srv1, json3\r\nsr Serbian vtt, ttml, srv3, srv2, srv1, 
json3\r\nsn Shona vtt, ttml, srv3, srv2, srv1, json3\r\nsd Sindhi vtt, ttml, srv3, srv2, srv1, json3\r\nsi Sinhala vtt, ttml, srv3, srv2, srv1, json3\r\nsk Slovak vtt, ttml, srv3, srv2, srv1, json3\r\nsl Slovenian vtt, ttml, srv3, srv2, srv1, json3\r\nso Somali vtt, ttml, srv3, srv2, srv1, json3\r\nst Southern Sotho vtt, ttml, srv3, srv2, srv1, json3\r\nes Spanish vtt, ttml, srv3, srv2, srv1, json3\r\nsu Sundanese vtt, ttml, srv3, srv2, srv1, json3\r\nsw Swahili vtt, ttml, srv3, srv2, srv1, json3\r\nsv Swedish vtt, ttml, srv3, srv2, srv1, json3\r\ntg Tajik vtt, ttml, srv3, srv2, srv1, json3\r\nta Tamil vtt, ttml, srv3, srv2, srv1, json3\r\ntt Tatar vtt, ttml, srv3, srv2, srv1, json3\r\nte Telugu vtt, ttml, srv3, srv2, srv1, json3\r\nth Thai vtt, ttml, srv3, srv2, srv1, json3\r\nti Tigrinya vtt, ttml, srv3, srv2, srv1, json3\r\nts Tsonga vtt, ttml, srv3, srv2, srv1, json3\r\ntr Turkish vtt, ttml, srv3, srv2, srv1, json3\r\ntk Turkmen vtt, ttml, srv3, srv2, srv1, json3\r\nuk Ukrainian vtt, ttml, srv3, srv2, srv1, json3\r\nur Urdu vtt, ttml, srv3, srv2, srv1, json3\r\nug Uyghur vtt, ttml, srv3, srv2, srv1, json3\r\nuz Uzbek vtt, ttml, srv3, srv2, srv1, json3\r\nvi Vietnamese vtt, ttml, srv3, srv2, srv1, json3\r\ncy Welsh vtt, ttml, srv3, srv2, srv1, json3\r\nfy Western Frisian vtt, ttml, srv3, srv2, srv1, json3\r\nxh Xhosa vtt, ttml, srv3, srv2, srv1, json3\r\nyi Yiddish vtt, ttml, srv3, srv2, srv1, json3\r\nyo Yoruba vtt, ttml, srv3, srv2, srv1, json3\r\nzu Zulu vtt, ttml, srv3, srv2, srv1, json3\r\ni6kccBc-FBQ has no subtitles\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '91302ed349f34dc26cc1d661bb45a4b71f4417f7', 'files': [{'path': 'yt_dlp/options.py', 'Loc': {\"(None, 'create_parser', 216)\": {'mod': [853, 857, 861]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0\nThis one doesn't count, since the user knows the command issue is just a quoting problem", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/options.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "6075a029dba70a89675ae1250e7cdfd91f0eba41", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/10356", "iss_label": "question", "title": "Unable to install curl_cffi", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nI am trying to install `curl_cffi` in order to get around Vimeo's new TLS fingerprinting anti-bot protection. 
I have run the command `pipx install 'yt-dlp[default,curl_cffi]' --force'`, which gives the output:\r\n\r\n```\r\nInstalling to existing venv 'yt-dlp'\r\n\u26a0\ufe0f Note: yt-dlp was already on your PATH at /opt/homebrew/bin/yt-dlp\r\n installed package yt-dlp 2024.7.2, installed using Python 3.12.4\r\n These apps are now globally available\r\n - yt-dlp\r\n These manual pages are now globally available\r\n - man1/yt-dlp.1\r\n\u26a0\ufe0f Note: '/Users/username-hidden/.local/bin' is not on your PATH environment variable. These apps will not be globally accessible until your PATH is updated. Run `pipx ensurepath` to automatically add it,\r\n or manually modify your PATH in your shell's config file (e.g. ~/.bashrc).\r\ndone! \u2728 \ud83c\udf1f \u2728\r\n```\r\n\r\nFrom this output, I understand that `curl_cffi` would have been installed. However, running `yt-dlp --list-impersonate-targets -vU` does not show it.\r\n\r\nI intend to use `--impersonate chrome` but I am stuck at `curl_cffi` installation. Any help would be **greatly** appreciated. Thank you.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['--list-impersonate-targets', '-vU']\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.07.02 from yt-dlp/yt-dlp [93d33cb29] (pip)\r\n[debug] Python 3.12.4 (CPython arm64 64bit) - macOS-14.5-arm64-arm-64bit (OpenSSL 3.3.1 4 Jun 2024)\r\n[debug] exe versions: ffmpeg 7.0.1 (setts), ffprobe 7.0.1\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.06.02, mutagen-1.47.0, requests-2.32.3, sqlite3-3.46.0, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1831 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\nLatest version: stable@2024.07.02 from yt-dlp/yt-dlp\r\nyt-dlp is up to date (stable@2024.07.02 from yt-dlp/yt-dlp)\r\n[info] Available impersonate targets\r\nClient OS Source\r\n---------------------------------------\r\nChrome - curl_cffi (not available)\r\nEdge - curl_cffi (not available)\r\nSafari - curl_cffi (not available)\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".zshrc", ".bash_profile"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".bash_profile", ".zshrc"], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "4a601c9eff9fb42e24a4c8da3fa03628e035b35b", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/8479", "iss_label": "question\nNSFW", "title": "OUTPUT TEMPLATE --output %(title)s.%(ext)s", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the 
[README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I'm running yt-dlp version **2023.10.13** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\nI'm using the latest yt-dlp which states does not support website https://sfmcompile.club/. Understood.\r\nIssue: The pages appear to just be playlist of others' posts. A series of pages may take the format below:\r\n\r\n**_### LINKS ARE NSFW_**\r\nhttps://sfmcompile.club/category/overwatch/dva/page/2/\r\nhttps://sfmcompile.club/category/overwatch/dva/page/3/\r\nhttps://sfmcompile.club/category/overwatch/dva/page/4/\r\n**_### LINKS ARE NSFW_**\r\n\r\nI'm copying/pasting to a text file, the link's base, adding the page number, then the trailing slash. After having a series of these weblinks, I run yt-dlp against this text file. Each weblink contains about 8 posts per page. yt-dlp downloads the 8 posts for that page.\r\nDVA (1)\r\nDVA (2)\r\nDVA (3)\r\nDVA (4)\r\nDVA (5)\r\nDVA (6)\r\nDVA (7)\r\nDVA (8)\r\n\r\nyt-dlp then goes to the next weblink in the text file and \"reports\" the file has already been downloaded:\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nand again,\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nand again,\r\nDVA (1)\r\nDVA (2)\r\netc.\r\n\r\nIt repeats with whatever number of weblinks in the text file until exhausted. I might be trying to download 8 weblinks multiplied by 8 posts which should be 64, but is instead only the original 8 from the first page.\r\n\r\nI understand I can add something like %(autonumber)s to the output but each of these posts in the playlists do have an actual title to them.\r\nDVA eating lunch\r\nDVA at the park\r\nDVA at work\r\n(lol)\r\n\r\nI'd prefer to use the original title of the post rather than repeating title with a follow-on count.\r\nDVA (1) 00001\r\nDVA (2) 00002\r\nDVA (3) 00003\r\nDVA (4) 00004\r\nDVA (5) 00005\r\nDVA (6) 00006\r\nDVA (7) 00007\r\nDVA (8) 00008\r\n\r\nDVA (1) 00009\r\nDVA (2) 00010\r\netc.\r\n\r\nI've experimented with using most of the OUTPUT TEMPLATE options on the yt-dlp page but can't for the life of me seem to figure out which output string is going to give me the output I desire. Most of them give me **NA**.\r\n\r\nid (string): Video identifier\r\ntitle (string): Video title\r\nfulltitle (string): Video title ignoring live timestamp and generic title\r\next (string): Video filename extension\r\nalt_title (string): A secondary title of the video\r\ndescription (string): The description of the video\r\ndisplay_id (string): An alternative identifier for the video\r\n\r\nEven tried %(original_url)s w/ no luck, thinking I could at least get the https://www.blahblahblah.com, and then afterward use a mass filename editor to edit out the unwanted https:// and .com. 
Nope, get an NA.\r\n\r\n**If there is a way to \"poll\" a weblink to see \"keywords\" that would be great!**\r\n\r\nIn advance, any help is appreciated.\r\n\r\nMy yt-dlp.conf\r\n\r\n```\r\n--no-download-archive\r\n--no-clean-info-json\r\n--windows-filenames\r\n--trim-filenames 140\r\n--ffmpeg-location \"..\\..\\..\\..\\ffmpeg\\bin\\ffmpeg.exe\"\r\n--audio-format \"mp3\"\r\n--format \"bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4\"\r\n--output \"D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\%(title)s.%(ext)s\"\r\n```\r\n\r\n```\r\nG:\\00OSz\\12win10b zEnt-LTSC 1809 x64\\05Apps\\Multimedia\\Video\\Installed\\yt-dlp Singles\\Support\\Folder Prep\\aX Drive Source>\"..\\..\\..\\yt-dlp.exe\" --config-location \"..\\..\\..\\yt-dlp.conf\" --batch-file \".\\aBatch URLs.txt\" --verbose\r\n[debug] Command-line config: ['--config-location', '..\\\\..\\\\..\\\\yt-dlp.conf', '--batch-file', '.\\\\aBatch URLs.txt', '--verbose']\r\n[debug] | Config \"..\\..\\..\\yt-dlp.conf\": ['--no-download-archive', '--no-clean-info-json', '--windows-filenames', '--trim-filenames', '140', '--ffmpeg-location', '..\\\\..\\\\..\\\\..\\\\ffmpeg\\\\bin\\\\ffmpeg.exe', '--audio-format', 'mp3', '--format', 'bestvideo[ext=mp4]+bestaudio[ext=m4a]/mp4', '--output', 'D:\\\\11Downloadz\\\\bTorrents Complete\\\\Podcasts\\\\tmp in\\\\%(title)s.%(ext)s']\r\n[debug] Batch file urls: ['https://sfmcompile.club/tag/lazyprocrastinator/page/1/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/2/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/3/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/4/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/5/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/6/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/7/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/8/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/9/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/10/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/11/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/12/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/13/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/14/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/15/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/16/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/17/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/18/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/19/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/20/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/21/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/22/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/23/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/24/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/25/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/26/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/27/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/28/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/29/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/30/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/31/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/32/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/33/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/34/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/35/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/36/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/37/', 
'https://sfmcompile.club/tag/lazyprocrastinator/page/38/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/39/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/40/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/41/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/42/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/43/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/44/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/45/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/46/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/47/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/48/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/49/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/50/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/51/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/52/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/53/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/54/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/55/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/56/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/57/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/58/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/59/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/60/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/61/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/62/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/63/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/64/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/65/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/66/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/67/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/68/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/69/', 'https://sfmcompile.club/tag/lazyprocrastinator/page/70/']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2023.09.24.003044 [de015e930] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg N-110072-g073ec3b9da-20230325 (setts), ffprobe N-110072-g073ec3b9da-20230325\r\n[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.07.22, mutagen-1.47.0, sqlite3-3.35.5, websockets-11.0.3\r\n[debug] Proxy map: {}\r\n[debug] Loaded 1886 extractors\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/page/1/\r\n[generic] 1: Downloading webpage\r\n[redirect] Following redirect to https://sfmcompile.club/tag/lazyprocrastinator/\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/\r\n[generic] lazyprocrastinator: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] lazyprocrastinator: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified 8 html5 embeds\r\n[download] Downloading playlist: LazyProcrastinator Archives\r\n[generic] Playlist LazyProcrastinator Archives: Downloading 8 items of 8\r\n[download] Downloading item 1 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-1: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on 
\"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-blowjob-pov-Sound-update.mp4\"\r\n[debug] File locking is not supported. Proceeding without locking\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (1).mp4\r\n[download] 100% of 2.66MiB in 00:00:00 at 5.05MiB/s\r\n[download] Downloading item 2 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-2: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Hood-Lunafreya-spooning-fuck-Sound-update.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (2).mp4\r\n[download] 100% of 3.53MiB in 00:00:00 at 6.47MiB/s\r\n[download] Downloading item 3 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-3: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Bunny-Serah-Farron-sideway-proneboned.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (3).mp4\r\n[download] 100% of 3.09MiB in 00:00:00 at 6.05MiB/s\r\n[download] Downloading item 4 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-4: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Bunny-Serah-Farron-sideway-fucked.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (4).mp4\r\n[download] 100% of 2.97MiB in 00:00:00 at 5.50MiB/s\r\n[download] Downloading item 5 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-5: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Sadako-caught-on-tape.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (5).mp4\r\n[download] 100% of 1.77MiB in 00:00:00 at 4.34MiB/s\r\n[download] Downloading item 6 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-6: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Hood-Lunafreya-mating-press-Sound-update.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (6).mp4\r\n[download] 100% of 2.65MiB in 00:00:00 at 4.40MiB/s\r\n[download] Downloading item 7 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-7: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on 
\"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-hand-holding-cowgirl.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (7).mp4\r\n[download] 100% of 1.67MiB in 00:00:00 at 4.73MiB/s\r\n[download] Downloading item 8 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] lazyprocrastinator-8: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-dangerous-handjob-pov.mp4\"\r\n[download] Destination: D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (8).mp4\r\n[download] 100% of 4.85MiB in 00:00:00 at 4.86MiB/s\r\n[download] Finished downloading playlist: LazyProcrastinator Archives\r\n[generic] Extracting URL: https://sfmcompile.club/tag/lazyprocrastinator/page/2/\r\n[generic] 2: Downloading webpage\r\nWARNING: [generic] Falling back on generic information extractor\r\n[generic] 2: Extracting information\r\n[debug] Looking for embeds\r\n[debug] Identified 8 html5 embeds\r\n[download] Downloading playlist: LazyProcrastinator Archives\r\n[generic] Playlist LazyProcrastinator Archives: Downloading 8 items of 8\r\n[download] Downloading item 1 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-1: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-dangerous-thighjob-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (1).mp4 has already been downloaded\r\n[download] 100% of 2.66MiB\r\n[download] Downloading item 2 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-2: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-face-sitting-and-feetjob.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (2).mp4 has already been downloaded\r\n[download] 100% of 3.53MiB\r\n[download] Downloading item 3 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-3: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-heel-torture-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (3).mp4 has already been downloaded\r\n[download] 100% of 3.09MiB\r\n[download] Downloading item 4 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-4: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Ashley-Graham-cowgirl-riding-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (4).mp4 has already been downloaded\r\n[download] 100% of 2.97MiB\r\n[download] 
Downloading item 5 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-5: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Red-Riding-Lunafreya-lifted-anal-Sound-update.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (5).mp4 has already been downloaded\r\n[download] 100% of 1.77MiB\r\n[download] Downloading item 6 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-6: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-sucking-nip-and-handjob-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (6).mp4 has already been downloaded\r\n[download] 100% of 2.65MiB\r\n[download] Downloading item 7 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-7: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/Infected-2B-reverse-cowgirl-ride-pov.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (7).mp4 has already been downloaded\r\n[download] 100% of 1.67MiB\r\n[download] Downloading item 8 of 8\r\n[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id\r\n[info] 2-8: Downloading 1 format(s): 0\r\n[debug] Invoking http downloader on \"https://sfmcompile.club/wp-content/uploads/2023/10/2B-thighs-crushing-and-handjob.mp4\"\r\n[download] D:\\11Downloadz\\bTorrents Complete\\Podcasts\\tmp in\\LazyProcrastinator Archives (8).mp4 has already been downloaded\r\n```\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [x] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["yt-dlp.conf"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": ["yt-dlp.conf"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "a903d8285c96b2c7ac7915f228a17e84cbfe3ba4", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/1238", "iss_label": "question", "title": "[Question] How to use Sponsorblock as part of Python script", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in 
order to prevent the most common mistakes and misuse of yt-dlp:\r\n- Look through the README (https://github.com/yt-dlp/yt-dlp)\r\n- Read \"opening an issue\" section in CONTRIBUTING.md: https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue\r\n- Search the bugtracker for similar questions: https://github.com/yt-dlp/yt-dlp/issues\r\n- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)\r\n-->\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README\r\n- [x] I've read the opening an issue section in CONTRIBUTING.md\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n- [x] I have given an appropriate title to the issue\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/yt-dlp/yt-dlp.\r\n-->\r\n\r\nWhat are the relevant `ydl_opts` to use Sponsorblock with yt-dlp as part of a Python script?\r\n\r\n[README.md](https://github.com/yt-dlp/yt-dlp/blob/master/README.md#sponsorblock-options) documents usage on the command line and [yt_dlp/YoutubeDL.py](https://github.com/yt-dlp/yt-dlp/blob/master/yt_dlp/YoutubeDL.py) doesn't mention Sponsorblock at all.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a903d8285c96b2c7ac7915f228a17e84cbfe3ba4', 'files': [{'path': 'yt_dlp/__init__.py', 'Loc': {\"(None, '_real_main', 62)\": {'mod': [427, 501]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "8531d2b03bac9cc746f2ee8098aaf8f115505f5b", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/10462", "iss_label": "question", "title": "Cookie not loading when downloading instagram videos", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\r\n\r\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\r\n\r\n### Checklist\r\n\r\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\r\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\r\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\r\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\r\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\r\n\r\n### Please make sure the question is worded well enough to be understood\r\n\r\nI tried to download instagram videos with this code but the cookie does not load.\r\n\r\nBut with ```yt-dlp --cookies instagram_cookie.txt \"https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\"``` it does.\r\nIs there something wrong with my code? 
If so, please let me know the solution.\r\nSorry if I have missed something.\r\n\r\n```\r\nfrom yt_dlp import YoutubeDL\r\nimport subprocess\r\n\r\ndef download_video(url):\r\n if url in \".m3u8\":\r\n subprocess.run(f'ffmpeg -i {url} -c copy \"%name%.mp4\"', shell=True)\r\n print(\"Downloaded the m3u8 file\")\r\n else:\r\n ydl_opts = {\r\n 'format': 'best[ext=mp4]',\r\n 'outtmpl': '%(title)s.%(ext)s',\r\n 'verbose': True,\r\n }\r\n\r\n if \"instagram.com\" in url:\r\n ydl_opts[\"cookies\"] = \"instagram_cookie.txt\"\r\n print(ydl_opts)\r\n \r\n with YoutubeDL(ydl_opts) as ydl:\r\n result = ydl.extract_info(url, download=True)\r\n file_path = ydl.prepare_filename(result)\r\n print(f\"Downloaded {file_path}\")\r\n \r\n return file_path\r\n\r\nif __name__ == \"__main__\":\r\n download_video(input(\"URL:\"))\r\n```\r\n\r\n### Provide verbose output that clearly demonstrates the problem\r\n\r\n- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\r\n- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead\r\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\r\n\r\n### Complete Verbose Output\r\n\r\n```shell\r\nURL:https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\r\n{'format': 'best[ext=mp4]', 'outtmpl': '%(title)s.%(ext)s', 'verbose': True, 'cookies': 'instagram_cookie.txt'}\r\n[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version nightly@2024.07.13.232701 from yt-dlp/yt-dlp-nightly-builds [150ecc45d] (pip) API\r\n[debug] params: {'format': 'best[ext=mp4]', 'outtmpl': '%(title)s.%(ext)s', 'verbose': True, 'cookies': 'instagram_cookie.txt', 'compat_opts': set(), 'http_headers': {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.74 Safari/537.36', 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', 'Accept-Language': 'en-us,en;q=0.5', 'Sec-Fetch-Mode': 'navigate'}}\r\n[debug] Python 3.10.14 (CPython x86_64 64bit) - Linux-6.5.0-1023-gcp-x86_64-with-glibc2.39 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.39)\r\n[debug] exe versions: none\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets\r\n[debug] Loaded 1834 extractors\r\n[Instagram] Extracting URL: https://www.instagram.com/p/C9SEsmYCx_M/?hl=ja\r\n[Instagram] C9SEsmYCx_M: Setting up session\r\n[Instagram] C9SEsmYCx_M: Downloading JSON metadata\r\nWARNING: [Instagram] C9SEsmYCx_M: General metadata extraction failed (some metadata might be missing).\r\n[Instagram] C9SEsmYCx_M: Downloading webpage\r\nWARNING: [Instagram] unable to extract shared data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U\r\nWARNING: [Instagram] Main webpage is locked behind the login page. Retrying with embed webpage (some metadata might be missing).\r\n[Instagram] C9SEsmYCx_M: Downloading embed webpage\r\nWARNING: [Instagram] unable to extract additional data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. 
Confirm you are on the latest version using yt-dlp -U\r\nERROR: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 740, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/instagram.py\", line 460, in _real_extract\r\n self.raise_login_required('Requested content is not available, rate-limit reached or login required')\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 1245, in raise_login_required\r\n raise ExtractorError(msg, expected=True)\r\n\r\nTraceback (most recent call last):\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1622, in wrapper\r\n return func(self, *args, **kwargs)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1757, in __extract_info\r\n ie_result = ie.extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 740, in extract\r\n ie_result = self._real_extract(url)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/instagram.py\", line 460, in _real_extract\r\n self.raise_login_required('Requested content is not available, rate-limit reached or login required')\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/extractor/common.py\", line 1245, in raise_login_required\r\n raise ExtractorError(msg, expected=True)\r\nyt_dlp.utils.ExtractorError: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/runner/moive-download-exe/main.py\", line 27, in <module>\r\n download_video(input(\"URL:\"))\r\n File \"/home/runner/moive-download-exe/main.py\", line 20, in download_video\r\n result = ydl.extract_info(url, download=True)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1611, in extract_info\r\n return self.__extract_info(url, self.get_info_extractor(key), download, extra_info, process)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1640, in wrapper\r\n self.report_error(str(e), e.format_traceback())\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1088, in report_error\r\n self.trouble(f'{self._format_err(\"ERROR:\", self.Styles.ERROR)} {message}', *args, **kwargs)\r\n File \"/home/runner/moive-download-exe/.pythonlibs/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py\", line 1027, in trouble\r\n raise DownloadError(message, exc_info)\r\nyt_dlp.utils.DownloadError: ERROR: [Instagram] C9SEsmYCx_M: Requested content is not available, rate-limit reached or login required. 
Use --cookies, --cookies-from-browser, --username and --password, --netrc-cmd, or --netrc (instagram) to provide account credentials\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8531d2b03bac9cc746f2ee8098aaf8f115505f5b', 'files': [{'path': 'yt_dlp/YoutubeDL.py', 'Loc': {\"('YoutubeDL', None, 189)\": {'mod': [335]}}, 'status': 'modified'}, {'path': 'yt_dlp/__init__.py', 'Loc': {\"(None, 'parse_options', 737)\": {'mod': [901]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/__init__.py", "yt_dlp/YoutubeDL.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "yt-dlp", "repo_name": "yt-dlp", "base_commit": "e59c82a74cda5139eb3928c75b0bd45484dbe7f0", "iss_html_url": "https://github.com/yt-dlp/yt-dlp/issues/11152", "iss_label": "question", "title": "How to use --merge-output-format?", "body": "### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE\n\n- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\\* field\n\n### Checklist\n\n- [X] I'm asking a question and **not** reporting a bug or requesting a feature\n- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)\n- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))\n- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates\n- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)\n\n### Please make sure the question is worded well enough to be understood\n\nHello,\r\n\r\nThis is the first time I'm trying to use the \"--merge-output-format\" option to download and merge a video stream with an audio stream\u2026 and it failed:\r\n\r\n```\r\nyoutube-dlp.exe -qF\r\nyoutube-dlp.exe -f '160+140' --merge-output-format mp4 https://www.youtube.com/watch?v=123ABC\r\nRequested format is not available. 
Use --list-formats for a list of available formats\r\n```\r\n\r\nWhat is the right way to use that switch?\r\n\r\nThank you.\n\n### Provide verbose output that clearly demonstrates the problem\n\n- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)\n- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead\n- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below\n\n### Complete Verbose Output\n\n```shell\n[debug] Command-line config: ['-vU']\r\n[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8\r\n[debug] yt-dlp version stable@2024.08.06 from yt-dlp/yt-dlp [4d9231208] (win_exe)\r\n[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1k 25 Mar 2021)\r\n[debug] exe versions: ffmpeg 2024-06-13-git-0060a368b1-essentials_build-www.gyan.dev (setts), ffprobe 2024-06-13-git-0060a368b1-essentials_build-www.gyan.dev\r\n[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.07.04, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.35.5, urllib3-2.2.2, websockets-12.0\r\n[debug] Proxy map: {}\r\n[debug] Request Handlers: urllib, requests, websockets, curl_cffi\r\n[debug] Loaded 1830 extractors\r\n[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest\r\n[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec\r\n[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/download/2024.09.27/SHA2-256SUMS\r\nCurrent version: stable@2024.08.06 from yt-dlp/yt-dlp\r\nLatest version: stable@2024.09.27 from yt-dlp/yt-dlp\r\nCurrent Build Hash: 468a6f8bf1d156ad173e000a40f696d4fbd69c5aa7360229329b9063a388e7d0\r\nUpdating to stable@2024.09.27 from yt-dlp/yt-dlp ...\r\n[debug] Downloading yt-dlp.exe from https://github.com/yt-dlp/yt-dlp/releases/download/2024.09.27/yt-dlp.exe\r\nUpdated yt-dlp to stable@2024.09.27 from yt-dlp/yt-dlp\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e59c82a74cda5139eb3928c75b0bd45484dbe7f0', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 1430)': {'mod': [1430]}}, 'status': 'modified'}, {'path': 'yt_dlp/options.py', 'Loc': {\"(None, 'create_parser', 219)\": {'mod': [786, 790]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["yt_dlp/options.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f18ebbd31645437afaa9738fcf2b5ed8b48cb021", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/6177", "iss_label": "Feature", "title": "Workflow that can follow different paths and skip some of them.", "body": "### Feature Idea\n\nHi.\r\nI am very interested in the ability to create a workflow that can follow different paths and skip some if they are not needed.\r\n\r\nFor example, I want to create an image and save it under a fixed name (unique). 
But tomorrow (or after a restart) I want to run this workflow again and work with the image I already created and saved earlier, rather than waste time on creating it again (upscale, modification, etc.): the workflow should just check whether this image is in my folder and, if it is, load it and work with the loaded image, so that the branch that creates the image does not run at all (skip this branch).\r\nBut it's important that the script does this by itself (without MUTE or BYPASS).\r\n\r\nExample \r\n![Screenshot_1](https://github.com/user-attachments/assets/840c86a0-7944-49ca-95fd-15825a632c7f)\r\n\r\nThis will help save a lot of time on complex workflows that need to be improved or modernized. It can also save resources in case of an interruption or a lack of memory - large parts of the graph can be skipped if they have already been computed and saved (without keeping models that have already run in memory).\r\n\n\n### Existing Solutions\n\nI've been trying for a long time to find out if such a possibility exists, but I couldn't find it. If such a feature is already implemented, where can I find it? Thanks.\n\n### Other\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ltdrdata", "pro": "ComfyUI-extension-tutorials", "path": ["ComfyUI-Impact-Pack/tutorial/switch.md"]}], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["ComfyUI-Impact-Pack/tutorial/switch.md"], "test": [], "config": [], "asset": []}} +{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "834ab278d2761c452f8e76c83fb62d8f8ce39301", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/1064", "iss_label": "", "title": "Error occurred when executing CLIPVisionEncode", "body": "Hi there, \r\nsomehow I can't get unCLIP to work. \r\n\r\nThe .png has the unCLIP example workflow I tried out, but it gets stuck in the CLIPVisionEncode module.\r\nWhat can I do to solve this? 
\r\n\r\nError occurred when executing CLIPVisionEncode:\r\n\r\n'NoneType' object has no attribute 'encode_image'\r\n\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 144, in recursive_execute\r\noutput_data, output_ui = get_output_data(obj, input_data_all)\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 74, in get_output_data\r\nreturn_values = map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True)\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 67, in map_node_over_list\r\nresults.append(getattr(obj, func)(**slice_dict(input_data_all, i)))\r\nFile \"D:\\ComfyUI_windows_portable_nvidia_cu118_or_cpu\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 742, in encode\r\noutput = clip_vision.encode_image(image)\r\n\r\n\r\n\r\n![unclip_2pass](https://github.com/comfyanonymous/ComfyUI/assets/141161676/51b5ed7c-d5d9-4b88-a973-a54882039653)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '834ab278d2761c452f8e76c83fb62d8f8ce39301', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 30)': {'mod': [30]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "model\n+\nDoc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "3c60ecd7a83da43d694e26a77ca6b93106891251", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/5229", "iss_label": "User Support", "title": "Problem with ComfyUI workflow \"ControlNetApplySD3 'NoneType' object has no attribute 'copy'\"", "body": "### Your question\n\nI get the following error when running the workflow\r\n\r\nI leave a video of what I am working on as a reference.\r\n\r\nhttps://www.youtube.com/watch?v=MbQv8zoNEfY\r\n\r\nvideo of reference\n\n### Logs\n\n```powershell\n# ComfyUI Error Report\r\n## Error Details\r\n- **Node Type:** ControlNetApplySD3\r\n- **Exception Type:** AttributeError\r\n- **Exception Message:** 'NoneType' object has no attribute 'copy'\r\n## Stack Trace\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n 
c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\n\r\n```\r\n## System Information\r\n- **ComfyUI Version:** v0.2.3-3-g6632365\r\n- **Arguments:** ComfyUI\\main.py --windows-standalone-build\r\n- **OS:** nt\r\n- **Python Version:** 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]\r\n- **Embedded Python:** true\r\n- **PyTorch Version:** 2.4.1+cu124\r\n## Devices\r\n\r\n- **Name:** cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n - **Type:** cuda\r\n - **VRAM Total:** 25769148416\r\n - **VRAM Free:** 19327837688\r\n - **Torch VRAM Total:** 5100273664\r\n - **Torch VRAM Free:** 57107960\r\n\r\n## Logs\r\n```\r\n2024-10-12 11:47:24,318 - root - INFO - Total VRAM 24575 MB, total RAM 65461 MB\r\n2024-10-12 11:47:24,318 - root - INFO - pytorch version: 2.4.1+cu124\r\n2024-10-12 11:47:24,318 - root - INFO - Set vram state to: NORMAL_VRAM\r\n2024-10-12 11:47:24,318 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n2024-10-12 11:47:26,738 - root - INFO - Using pytorch cross attention\r\n2024-10-12 11:47:32,778 - root - INFO - [Prompt Server] web root: D:\\ComfyUI_windows_portable\\ComfyUI\\web\r\n2024-10-12 11:47:36,818 - root - INFO - Total VRAM 24575 MB, total RAM 65461 MB\r\n2024-10-12 11:47:36,818 - root - INFO - pytorch version: 2.4.1+cu124\r\n2024-10-12 11:47:36,818 - root - INFO - Set vram state to: NORMAL_VRAM\r\n2024-10-12 11:47:36,818 - root - INFO - Device: cuda:0 NVIDIA GeForce RTX 3090 : cudaMallocAsync\r\n2024-10-12 11:47:37,468 - root - INFO - \r\nImport times for custom nodes:\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\websocket_image_save.py\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\cg-use-everywhere\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI_UltimateSDUpscale\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\rgthree-comfy\r\n2024-10-12 11:47:37,468 - root - INFO - 0.0 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-KJNodes\r\n2024-10-12 11:47:37,468 - root - INFO - 0.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI_essentials\r\n2024-10-12 11:47:37,468 - root - INFO - 0.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\comfyui_controlnet_aux\r\n2024-10-12 11:47:37,468 - root - INFO - 0.3 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-eesahesNodes\r\n2024-10-12 11:47:37,468 - root - INFO - 0.4 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Manager\r\n2024-10-12 11:47:37,468 - root - INFO - 0.4 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-Impact-Pack\r\n2024-10-12 11:47:37,468 - root - INFO - 1.1 seconds: D:\\ComfyUI_windows_portable\\ComfyUI\\custom_nodes\\ComfyUI-AdvancedLivePortrait\r\n2024-10-12 11:47:37,468 - root - INFO - \r\n2024-10-12 11:47:37,478 - root - INFO - Starting server\r\n\r\n2024-10-12 11:47:37,478 - root - INFO - To see the GUI go to: http://127.0.0.1:8188\r\n2024-10-12 12:16:10,093 - root - INFO - got prompt\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 147:\r\n2024-10-12 12:16:10,103 - root - ERROR - * UpscaleModelLoader 83:\r\n2024-10-12 12:16:10,103 - root - ERROR - - Value 
not in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\\\4x-ClearRealityV1.pth', 'ClearRealityV1\\\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp32.bin']\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 321:\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 311:\r\n2024-10-12 12:16:10,103 - root - ERROR - * InstantX Flux Union ControlNet Loader 334:\r\n2024-10-12 12:16:10,103 - root - ERROR - - Value not in list: control_net_name: 'flux\\InstantX_flux.safetensors' not in ['flux-canny-controlnet-v3.safetensors', 'flux-canny-controlnet.safetensors', 'flux-canny-controlnet_v2.safetensors', 'flux-depth-controlnet-v3.safetensors', 'flux-depth-controlnet.safetensors', 'flux-depth-controlnet_v2.safetensors', 'flux-hed-controlnet-v3.safetensors', 'flux-hed-controlnet.safetensors', 'flux\\\\diffusion_pytorch_model.safetensors']\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,103 - root - ERROR - Failed to validate prompt for output 301:\r\n2024-10-12 12:16:10,103 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 140:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 320:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 145:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 319:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 179:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 84:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 258:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,108 - root - ERROR - Failed to validate prompt for output 299:\r\n2024-10-12 12:16:10,108 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 138:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 146:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 322:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 317:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 323:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate 
prompt for output 316:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 300:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 87:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 318:\r\n2024-10-12 12:16:10,113 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,113 - root - ERROR - Failed to validate prompt for output 141:\r\n2024-10-12 12:16:10,118 - root - ERROR - Output will be ignored\r\n2024-10-12 12:16:10,647 - root - INFO - Using pytorch attention in VAE\r\n2024-10-12 12:16:10,647 - root - INFO - Using pytorch attention in VAE\r\n2024-10-12 12:16:18,202 - root - INFO - model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16\r\n2024-10-12 12:16:18,202 - root - INFO - model_type FLUX\r\n2024-10-12 12:27:11,335 - root - ERROR - error could not detect control model type.\r\n2024-10-12 12:27:11,335 - root - ERROR - error checkpoint does not contain controlnet or t2i adapter data D:\\ComfyUI_windows_portable\\ComfyUI\\models\\controlnet\\flux\\diffusion_pytorch_model.safetensors\r\n2024-10-12 12:27:13,290 - root - INFO - Requested to load FluxClipModel_\r\n2024-10-12 12:27:13,294 - root - INFO - Loading 1 new model\r\n2024-10-12 12:27:13,301 - root - INFO - loaded completely 0.0 4777.53759765625 True\r\n2024-10-12 12:27:51,099 - root - WARNING - clip missing: ['text_projection.weight']\r\n2024-10-12 12:27:52,730 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'\r\n2024-10-12 12:27:52,745 - root - ERROR - Traceback (most recent call last):\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'copy'\r\n\r\n2024-10-12 12:27:52,750 - root - INFO - Prompt executed in 702.63 seconds\r\n2024-10-12 12:44:26,904 - root - INFO - got prompt\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 147:\r\n2024-10-12 12:44:26,917 - root - ERROR - * UpscaleModelLoader 83:\r\n2024-10-12 12:44:26,917 - root - ERROR - - Value not 
in list: model_name: '4x-ClearRealityV1.pth' not in ['ClearRealityV1\\\\4x-ClearRealityV1.pth', 'ClearRealityV1\\\\4x-ClearRealityV1_Soft.pth', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1-fp32.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp16.bin', 'ClearRealityV1\\\\BROKEN_NCNN\\\\4x-ClearRealityV1_Soft-fp32.bin']\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 321:\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 311:\r\n2024-10-12 12:44:26,917 - root - ERROR - * InstantX Flux Union ControlNet Loader 334:\r\n2024-10-12 12:44:26,917 - root - ERROR - - Value not in list: control_net_name: 'flux\\InstantX_flux.safetensors' not in ['flux-canny-controlnet-v3.safetensors', 'flux-canny-controlnet.safetensors', 'flux-canny-controlnet_v2.safetensors', 'flux-depth-controlnet-v3.safetensors', 'flux-depth-controlnet.safetensors', 'flux-depth-controlnet_v2.safetensors', 'flux-hed-controlnet-v3.safetensors', 'flux-hed-controlnet.safetensors', 'flux\\\\diffusion_pytorch_model.safetensors']\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 301:\r\n2024-10-12 12:44:26,917 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,917 - root - ERROR - Failed to validate prompt for output 140:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 320:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 145:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 319:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 179:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 84:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 258:\r\n2024-10-12 12:44:26,922 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,922 - root - ERROR - Failed to validate prompt for output 299:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 138:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 146:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 322:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 317:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 323:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate 
prompt for output 316:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 300:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 87:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,927 - root - ERROR - Failed to validate prompt for output 318:\r\n2024-10-12 12:44:26,927 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,932 - root - ERROR - Failed to validate prompt for output 141:\r\n2024-10-12 12:44:26,932 - root - ERROR - Output will be ignored\r\n2024-10-12 12:44:26,992 - root - ERROR - !!! Exception during processing !!! 'NoneType' object has no attribute 'copy'\r\n2024-10-12 12:44:26,992 - root - ERROR - Traceback (most recent call last):\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 323, in execute\r\n output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 198, in get_output_data\r\n return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 169, in _map_node_over_list\r\n process_inputs(input_dict, i)\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\execution.py\", line 158, in process_inputs\r\n results.append(getattr(obj, func)(**inputs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\ComfyUI_windows_portable\\ComfyUI\\nodes.py\", line 848, in apply_controlnet\r\n c_net = control_net.copy().set_cond_hint(control_hint, strength, (start_percent, end_percent), vae=vae, extra_concat=extra_concat)\r\n ^^^^^^^^^^^^^^^^\r\nAttributeError: 'NoneType' object has no attribute 'copy'\r\n\r\n2024-10-12 12:44:26,997 - root - INFO - Prompt executed in 0.06 seconds\r\n```\r\n## Attached Workflow\r\nPlease make sure that workflow does not contain any sensitive information such as API keys or passwords.\r\n```\r\nWorkflow too large. 
Please manually upload the workflow from local file system.\r\n```\r\n\r\n## Additional Context\r\n(Please add any additional context or steps to reproduce the error here)\n```\n\n\n### Other\n\n![Screenshot 2024-10-12 at 12-30-54 ComfyUI](https://github.com/user-attachments/assets/f0f76743-0561-4c02-8915-43143904b5b3)\r\n![Screenshot 2024-10-12 at 12-29-58 ComfyUI](https://github.com/user-attachments/assets/91e3539f-aa8b-4e68-bd58-4c4894345ce3)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "Shakker-Labs", "pro": "FLUX.1-dev-ControlNet-Union-Pro"}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["FLUX.1-dev-ControlNet-Union-Pro"]}} +{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "494cfe5444598f22eced91b6f4bfffc24c4af339", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/96", "iss_label": "", "title": "Feature Request: model and output path setting", "body": "Symlinking is not ideal; setting a model folder is pretty standard these days, and most of us use more than one piece of software that uses models. \r\nThe same goes for choosing where to put the output images; personally mine go to a portable drive, and I'm not sure how to do that with ComfyUI.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '494cfe5444598f22eced91b6f4bfffc24c4af339', 'files': [{'path': 'extra_model_paths.yaml.example', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["extra_model_paths.yaml.example"], "asset": []}} +{"organization": "comfyanonymous", "repo_name": "ComfyUI", "base_commit": "f18ebbd31645437afaa9738fcf2b5ed8b48cb021", "iss_html_url": "https://github.com/comfyanonymous/ComfyUI/issues/6186", "iss_label": "User Support\nCustom Nodes Bug", "title": "error", "body": "### Your question\n\n[Errno 2] No such file or directory: 'D:\\\\ComfyUI_windows_portable_nvidia\\\\ComfyUI_windows_portable\\\\ComfyUI\\\\custom_nodes\\\\comfyui_controlnet_aux\\\\ckpts\\\\LiheYoung\\\\Depth-Anything\\\\.cache\\\\huggingface\\\\download\\\\checkpoints\\\\depth_anything_vitl14.pth.6c6a383e33e51c5fdfbf31e7ebcda943973a9e6a1cbef1564afe58d7f2e8fe63.incomplete' is:issue \n\n### Logs\n\n```powershell\n.\n```\n\n\n### Other\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".cache"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [".cache"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "e9df345a7853c52bfe98830bd2c9a07aaa7b81d9", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/159", "iss_label": "", "title": "Raspberry Pi Memory Error", "body": "* face_recognition version: 02.1\r\n* Python version: 2.7\r\n* Operating System: Raspbian\r\n\r\n### Description\r\n\r\nI installed face_recognition on my Raspberry Pi successfully for Python 3. Now I am trying to install it for Python 2 because I need it. While trying to install it, I get a MemoryError. 
I attached the images of my error. Please help me. \r\n\r\n![20170821_190454](https://user-images.githubusercontent.com/23421095/29530146-1e98e7be-86ab-11e7-91ea-e17c02170f63.jpg)\r\n![20170821_190501](https://user-images.githubusercontent.com/23421095/29530148-2113ac22-86ab-11e7-934d-e2062359f51a.jpg)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e9df345a7853c52bfe98830bd2c9a07aaa7b81d9', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "0961fd1aaf97336e544421318fcd4b55feeb1a79", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/533", "iss_label": "", "title": "knn neighbors name list?", "body": "In **face_recognition_knn.py**\r\nI want the name list of the 5 neighbors, so I changed n_neighbors=5.\r\n`closest_distances = knn_clf.kneighbors(faces_encodings, n_neighbors=5)`\r\nAnd it returned just 5 **distance_threshold** values from the trained .clf file.\r\n\r\nI found that `knn_clf.predict(faces_encodings)` returns only the 1 best-match name.\r\n\r\nHow can I get the name list of all 5 people?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "scikit-learn"}, {"pro": "scikit-learn", "path": ["sklearn/neighbors/_classification.py"]}], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["sklearn/neighbors/_classification.py"], "doc": [], "test": [], "config": [], "asset": ["scikit-learn"]}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "f21631401119e4af2e919dd662c3817b2c480c75", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/135", "iss_label": "", "title": "face_recognition with python", "body": "* face_recognition version:\r\n* Python version: 3.5\r\n* Operating System: Windows\r\n\r\n### Description\r\n\r\nI am working with some Python face recognition code in which I want to compare sampleface.jpg, which contains a sample face, with facegrid.jpg. The facegrid.jpg itself has some 6 faces in it. I am getting True for every face while I should be getting only one. The code is below. 
\r\n\r\n```python\r\nimport face_recognition\r\nimage = face_recognition.load_image_file(\"faceGrid.jpg\")\r\nsample_image = face_recognition.load_image_file(\"sampleface.jpg\")\r\n\r\nsample_face_encoding = face_recognition.face_encodings(sample_image)\r\n\r\nface_locations = face_recognition.face_locations(image)\r\n\r\nprint (len(face_locations), \" Faces\")\r\n\r\nfor face_location in face_locations:\r\n top, right, bottom, left = face_location\r\n face_image = image[top:bottom, left:right]\r\n face_encodings = face_recognition.face_encodings(face_image)[0]\r\n if face_recognition.compare_faces(sample_face_encoding,face_encodings)[0]:\r\n print (\"Found!\")\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f21631401119e4af2e919dd662c3817b2c480c75', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "cea177b75f74fe4e8ce73cf33da2e7e38e658ba4", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/726", "iss_label": "", "title": "cv2.imshow error", "body": "Hi All,\r\n\r\nWith the help of the docs I am trying to display an image with the code below, and I am getting an error. I tried everything I could think of (file extension, path, and Python version) to resolve this error but was not able to fix it, so please help.\r\n\r\nNote: 1. The image is present at the path. \r\n 2. The print statement gives None as output.\r\n 3. I am using Python 3.6 & opencv-python-4.0.0.21\r\n\r\nimport numpy\r\nimport cv2\r\n\r\nimg = cv2.imread('C:\\\\Users\\\\Public\\\\Pictures\\\\Sample Pictures\\\\Penguins.jpeg',0) # to read an image\r\n\r\ncv2.imshow('image',img) # to display image\r\ncv2.waitKey(0)\r\ncv2.destroyAllWindows()\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/rrmamidi/Desktop/old Desktop/compress_1/python/basic python scripts/about camera_opencv_cv2/about_img_read.py\", line 11, in <module>\r\n cv2.imshow('image',img) # to display image\r\ncv2.error: OpenCV(4.0.0) C:\\projects\\opencv-python\\opencv\\modules\\highgui\\src\\window.cpp:350: error: (-215:Assertion failed) size.width>0 && size.height>0 in function 'cv::imshow'\r\n\r\nThanks,\r\nRaja", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [12], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "b8fed6f3c0ad5ab2dab72d6251c60843cad71386", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/643", "iss_label": "", "title": "Train model with more than 1 image per person", "body": "* face_recognition version: 1.2.3\r\n* Python version: 2.7.15\r\n* Operating System: Windows 10\r\n\r\n### Description\r\n\r\nI would like to train the model with more than 1 image per person to achieve better recognition results. 
Is it possible?\r\n\r\nOne more question: what does [0] mean here:\r\n```\r\nknown_face_encoding_user = face_recognition.face_encodings('image.jpg')[0]\r\n```\r\nIf I put [1] here I receive an \"IndexError: list index out of range\" error.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b8fed6f3c0ad5ab2dab72d6251c60843cad71386', 'files': [{'path': 'examples/face_recognition_knn.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["examples/face_recognition_knn.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "aff06e965e895d8a6e781710e7c44c894e3011a3", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/68", "iss_label": "", "title": "cv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize", "body": "* face_recognition version:\r\n* Python version: 3.4\r\n* Operating System: Raspbian Jessie\r\n\r\n### Description\r\n\r\nWhenever I try to run facerec_from_webcam_faster.py, I get the error below. Note that I have checked my local files; the image to be recognized is placed appropriately. \r\n\r\n### \r\n\r\n\r\n```\r\nOpenCV Error: Assertion failed (ssize.area() > 0) in resize, file /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp, line 3229\r\nTraceback (most recent call last):\r\n File \"pj_webcam.py\", line 31, in <module>\r\n small_frame = cv2.resize(frame, (1, 1), fx=0.01, fy=0.01)\r\ncv2.error: /home/pi/opencv-3.1.0/modules/imgproc/src/imgwarp.cpp:3229: error: (-215) ssize.area() > 0 in function resize\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'aff06e965e895d8a6e781710e7c44c894e3011a3', 'files': [{'path': 'examples/facerec_from_webcam_faster.py', 'Loc': {'(None, None, None)': {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/facerec_from_webcam_faster.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "6da4a2ff0f0183280cdc2bffa58ddae8bf93ac41", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/181", "iss_label": "", "title": "does load_image_file have a version which reads from byte[] not just from the disk file", "body": "Does load_image_file have a version which reads from a byte array in memory, not just from a file on disk?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6da4a2ff0f0183280cdc2bffa58ddae8bf93ac41', 'files': [{'path': 'face_recognition/api.py', 'Loc': {\"(None, 'load_image_file', 73)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["face_recognition/api.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "5f804870c14803c2664942c958f11112276a79cc", "iss_html_url": 
"https://github.com/ageitgey/face_recognition/issues/209", "iss_label": "", "title": "face_locations get wrong result but dlib is correct", "body": "* face_recognition version: 1.0.0\r\n* Python version: 3.5\r\n* Operating System: Ubuntu 16.04 LTS\r\n\r\n### Description\r\nI run the example find_faces_in_picture_cnn.py to process the image from this link.\r\nhttps://timgsa.baidu.com/timg?image&quality=80&size=b9999_10000&sec=1507896274082&di=824f7f59943a71e2e9904d22175ce92c&imgtype=0&src=http%3A%2F%2Fwww.moontalk.com.tw%2Fupload%2Fimages%2F20160606angelina-03.jpg\r\nThe program detect the hand as a face ,I check the code and run example in dlib from this link ,the result is correct.\r\nhttp://dlib.net/cnn_face_detector.py.html\r\nSo the problem maybe occur in load_image_file ?\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5f804870c14803c2664942c958f11112276a79cc', 'files': [{'path': 'examples/find_faces_in_picture_cnn.py', 'Loc': {'(None, None, None)': {'mod': [12]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/find_faces_in_picture_cnn.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "a96484edc270697c666c1c32ba5163cf8e71b467", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/1004", "iss_label": "", "title": "IndexError: list index out of range while attempting to automatically recognize faces ", "body": "* face_recognition version: 1.2.3\r\n* Python version: 3.7.3\r\n* Operating System: Windows 10 x64\r\n\r\n### Description\r\n\r\nHello everyone,\r\nI was attempting to modify facerec_from_video_file.py in order to make it save the unknown faces in the video and recognize them based on the first frame they appear on for example if an unknown face appears on the frame 14 it should be recognized as \"new 14\" but i keep getting the error \"IndexError: list index out of range\" when a new face appears.\r\nSo here is my code and the traceback\r\n\r\n### What I Did\r\n\r\n```\r\nimport face_recognition\r\nimport cv2\r\n\r\ninput_movie = cv2.VideoCapture(\"video.mp4\")\r\nlength = int(input_movie.get(cv2.CAP_PROP_FRAME_COUNT))\r\n\r\n# Create an output movie file (make sure resolution/frame rate matches input video!)\r\nfourcc = cv2.VideoWriter_fourcc(*'XVID')\r\noutput_movie = cv2.VideoWriter('output.avi', fourcc, 29.97, (640, 360))\r\n\r\n\r\nnewimage = face_recognition.load_image_file(\"anchor.png\")\r\nnew_face_encoding = face_recognition.face_encodings(newimage)[0]\r\n\r\nknown_faces = [\r\n new_face_encoding,\r\n \r\n]\r\n\r\n# Initialize some variables\r\nface_locations = []\r\nface_encodings = []\r\nface_names = []\r\nframe_number = 0\r\n\r\n\r\ndef recog(frame_number, known_faces, face_names):\r\n toenc = []\r\n \r\n torec = face_recognition.load_image_file(r\"New\\Unknown%s.jpg\" %str(frame_number))\r\n \r\n #if not len(torec):\r\n # print(\"cannot find image\")\r\n #torec = face_recognition.load_image_file(r\"New\\Unknown%s.jpg\" %str(frame_number))\r\n toenc.append((face_recognition.face_encodings(torec))[0])\r\n if not len(toenc):\r\n print(\"can't be encoded\")\r\n known_faces.append(toenc.pop())\r\n face_names.append(\"new %s\" %str(frame_number)) \r\n \r\n# Load some sample pictures and learn how to recognize them.\r\n\r\nwhile True:\r\n # 
Grab a single frame of video\r\n ret, frame = input_movie.read()\r\n frame_number += 1\r\n\r\n # Quit when the input video file ends\r\n if not ret:\r\n break\r\n\r\n # Convert the image from BGR color (which OpenCV uses) to RGB color (which face_recognition uses)\r\n rgb_frame = frame[:, :, ::-1]\r\n\r\n # Find all the faces and face encodings in the current frame of video\r\n face_locations = face_recognition.face_locations(rgb_frame)\r\n face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)\r\n\r\n #face_names = []\r\n for face_encoding in face_encodings:\r\n # See if the face is a match for the known face(s)\r\n match = face_recognition.compare_faces(known_faces, face_encoding)\r\n \r\n \r\n # If you had more than 2 faces, you could make this logic a lot prettier\r\n # but I kept it simple for the demo\r\n name = \"Unknown\"\r\n \r\n face_names.append(name)\r\n\r\n # Label the results\r\n for (top, right, bottom, left), name in zip(face_locations, face_names):\r\n if not name:\r\n continue\r\n\r\n # Draw a box around the face\r\n unface = cv2.rectangle(frame, (left, top), (right, bottom), (0, 0, 255), 2)\r\n if name == \"Unknown\":\r\n res = frame[top:bottom, left:right]\r\n cv2.imwrite(r\"New\\Unknown%s.jpg\" %str(frame_number), res)\r\n recog(frame_number, known_faces, face_names)\r\n \r\n cv2.rectangle(frame, (left, bottom - 25), (right, bottom), (0, 0, 255), cv2.FILLED)\r\n font = cv2.FONT_HERSHEY_DUPLEX\r\n cv2.putText(frame, name, (left + 6, bottom - 6), font, 0.5, (255, 255, 255), 1)\r\n \r\n # Write the resulting image to the output video file\r\n print(\"Processing frame {} / {}\".format(frame_number, length))\r\n #output_movie.write(frame)\r\n cv2.imshow(\"frame\", frame)\r\n if( cv2.waitKey(27) & 0xFF == ord('q')):\r\n break\r\n\r\n# All done!\r\ninput_movie.release()\r\ncv2.destroyAllWindows()\r\n\r\n```\r\n### Output\r\n```\r\nIn [1]: runfile('D:/project_new/facerec_from_video_file.py', wdir='D:/project_new')\r\nProcessing frame 1 / 3291\r\nProcessing frame 2 / 3291\r\nProcessing frame 3 / 3291\r\nProcessing frame 4 / 3291\r\nProcessing frame 5 / 3291\r\nProcessing frame 6 / 3291\r\nProcessing frame 7 / 3291\r\nProcessing frame 8 / 3291\r\nProcessing frame 9 / 3291\r\nProcessing frame 10 / 3291\r\nProcessing frame 11 / 3291\r\nProcessing frame 12 / 3291\r\nTraceback (most recent call last):\r\n\r\n File \"<ipython-input-1-4b2c69ca71f8>\", line 1, in <module>\r\n runfile('D:/project_new/facerec_from_video_file.py', wdir='D:/project_new')\r\n\r\n File \"C:\\Users\\saber\\Anaconda3\\lib\\site-packages\\spyder_kernels\\customize\\spydercustomize.py\", line 827, in runfile\r\n execfile(filename, namespace)\r\n\r\n File \"C:\\Users\\saber\\Anaconda3\\lib\\site-packages\\spyder_kernels\\customize\\spydercustomize.py\", line 110, in execfile\r\n exec(compile(f.read(), filename, 'exec'), namespace)\r\n\r\n File \"D:/project_new/facerec_from_video_file.py\", line 81, in <module>\r\n recog(frame_number, known_faces, face_names)\r\n\r\n File \"D:/project_new/facerec_from_video_file.py\", line 35, in recog\r\n toenc.append((face_recognition.face_encodings(torec))[0])\r\n\r\nIndexError: list index out of range\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a96484edc270697c666c1c32ba5163cf8e71b467', 'files': [{'path': 'examples/facerec_from_video_file.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": 
"3", "info_type": "Code"}, "loctype": {"code": ["examples/facerec_from_video_file.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "a8830627e89bcfb9c9dda2c8f7cac5d2e5cfb6c0", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/178", "iss_label": "", "title": "IndexError: list index out of range", "body": "IndexError: list index out of range\r\n\r\nmy code:\r\n\r\nimport face_recognition\r\nknown_image = face_recognition.load_image_file(\"D:/1.jpg\")\r\nunknown_image = face_recognition.load_image_file(\"D:/2.jpg\")\r\nbiden_encoding = face_recognition.face_encodings(known_image)[0]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ageitgey", "repo_name": "face_recognition", "base_commit": "7f183afd9c848f05830c145890c04181dcc1c46b", "iss_html_url": "https://github.com/ageitgey/face_recognition/issues/93", "iss_label": "", "title": "how to do live face recognition with RPi", "body": "* Operating System: Debian\r\n\r\n### Description\r\n\r\ni want to use the example ```facerec_from_webcam_faster.py``` \r\nbut i don't know how to change the video_output source to the PiCam\r\n\r\n### What I Did\r\n\r\n```\r\ncamera = picamera.PiCamera()\r\ncamera.resolution = (320, 240)\r\noutput = np.empty((240, 320, 3), dtype=np.uint8)\r\n\r\n\r\nwhile True:\r\n # Grab a single frame of video\r\n ret, frame = camera.capture(output, format=\"rgb\")\r\n```\r\nbut i got erros, so how can i use the picam as source?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [18], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "14318e392fbe2d69511441edf5a172c4c72d6961", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/7095", "iss_label": "status/close", "title": "\u6587\u672c\u68c0\u6d4b\u5b8c\u7684\u56fe\u7247\u600e\u4e48\u8fdb\u884c\u6587\u672c\u8bc6\u522b\u554a\uff1f", "body": "\u662f\u8981\u628a\u8fb9\u754c\u6846\u6846\u51fa\u7684\u56fe\u7247\u526a\u88c1\u4e0b\u6765\uff0c\u9001\u8fdb\u8bc6\u522b\u6a21\u578b\u5417\uff1f\u5173\u4e8e\u8fd9\u4e2a\u7684\u4ee3\u7801\u5728\u54ea\u91cc\u554a", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '14318e392fbe2d69511441edf5a172c4c72d6961', 'files': [{'path': 'tools/infer/predict_system.py', 'Loc': {\"('TextSystem', '__call__', 67)\": {'mod': [69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/infer/predict_system.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "db60893201ad07a8c20d938a8224799f932779ad", "iss_html_url": 
"https://github.com/PaddlePaddle/PaddleOCR/issues/5641", "iss_label": "inference and deployment", "title": "PaddleServing\u600e\u6837\u4fee\u6539\u76f8\u5173\u53c2\u6570", "body": "\u6839\u636e [**\u57fa\u4e8ePaddleServing\u7684\u670d\u52a1\u90e8\u7f72**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/deploy/pdserving/README_CN.md) \u540e\uff0c\u600e\u6837\u5bf9\u6a21\u578b\u53ca\u670d\u52a1\u7684\u4e00\u4e9b\u53c2\u6570\u8fdb\u884c\u4fee\u6539\u5462\uff1f\r\n\u4f8b\u5982\u5982\u4e0b\u53c2\u6570\uff1a\r\nuse_tensorrt\r\nbatch_size\r\ndet_limit_side_len\r\nbatch_num\r\ntotal_process_num\r\n...\r\n\r\n\u7591\u60d1\uff1a\r\n1\u3001[**PaddleHub Serving\u90e8\u7f72**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/deploy/hubserving/readme.md)\uff0c\u652f\u6301\u4e00\u4e9b\u53c2\u6570\u4fee\u6539\r\n2\u3001[**\u57fa\u4e8ePython\u5f15\u64ce\u7684PP-OCR\u6a21\u578b\u5e93\u63a8\u7406**](https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.4/doc/doc_ch/inference_ppocr.md)\uff0c\u4e5f\u652f\u6301\u53c2\u6570\u4fee\u6539\r\n\r\n\u4e0a\u9762\u5217\u4e3e\u7684\u51e0\u4e2a\u53c2\u6570\u90fd\u6781\u5176\u91cd\u8981\uff0c\u4f46\u662fPaddleServing\u65b9\u6cd5\u5374\u4e0d\u652f\u6301\uff0c\u8bf7\u6307\u793a\uff01\u662f\u5426\u662f\u54ea\u91cc\u53ef\u4ee5\u8bbe\u7f6e\u800c\u6211\u6ca1\u627e\u5230", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'db60893201ad07a8c20d938a8224799f932779ad', 'files': [{'path': 'deploy/pdserving/web_service.py', 'Loc': {\"('DetOp', 'init_op', 30)\": {'mod': [31]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["deploy/pdserving/web_service.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "0afe6c3262babda2012074110520fe9d1a3c63c0", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/2405", "iss_label": "status/close", "title": "\u8f7b\u91cf\u6a21\u578b\u7684\u63a8\u65ad\u4e2d\uff0c\u6bcf\u9694\u51e0\u884c\u5c31\u4f1a\u51fa\u73b0\u4e00\u884c\u8bc6\u522b\u4e3a\u4e71\u7801", "body": "![image](https://user-images.githubusercontent.com/62594309/113710560-73095c00-9716-11eb-828d-40026f37715e.png)\r\n\u5c31\u50cf\u8fd9\u91cc\u84dd\u8272\u5708\u8d77\u6765\u7684\u8fd9\u884c\r\n\r\n\u4f46\u662f\u901a\u7528\u6a21\u578b\u5c31\u6ca1\u6709\u8fd9\u4e2a\u95ee\u9898\r\n\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\u5f15\u8d77\u7684\u5462\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0afe6c3262babda2012074110520fe9d1a3c63c0', 'files': [{'path': 'deploy/hubserving/readme_en.md', 'Loc': {'(None, None, 192)': {'mod': [192]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code\nDoc\nHow to modify own code"}, "loctype": {"code": [], "doc": ["deploy/hubserving/readme_en.md"], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "64edd41c277c60c672388be6d5764be85c1de43a", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5427", "iss_label": "status/close\nstale", "title": "rknn\u4e0d\u652f\u6301DepthwiseSeparable\u6a21\u5757\u4e2d\u7684ConvBNLayer\u5c42\u53c2\u6570stride(p1, p2) 
p1 and p2 differ", "body": "rknn does not support operators where the ConvBNLayer stride(p1, p2) parameters in the DepthwiseSeparable module differ (p1 != p2), and working around this involves modifying the network structure. I looked into it, and the case where p1 and p2 differ in stride(p1, p2) is there to perform downsampling. If I want to keep p1 equal to p2, how should I modify the parameters of the DepthwiseSeparable module, or of a higher-level module?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '64edd41c277c60c672388be6d5764be85c1de43a', 'files': [{'path': 'ppocr/modeling/backbones/rec_mobilenet_v3.py', 'Loc': {\"('MobileNetV3', '__init__', 23)\": {'mod': [48]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ppocr/modeling/backbones/rec_mobilenet_v3.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "2e352dcc06ba86159099ec6a2928c7ce556a7245", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/7542", "iss_label": "status/close", "title": "Loading my own recognition model in PaddleOCR: detection + recognition results differ from using the recognition model alone", "body": "First I used PaddleOCR's image detection function, cropped out the small text images according to the resulting detection boxes, and annotated them for use as a training set to train the text recognition model. Inference testing then showed no problems, so I used PaddleOCR to load the newly trained text recognition model and ran the full detection + recognition pipeline, and found that the recognition results were inconsistent.\r\n\r\n- System Environment: CentOS 7\r\n- Version: Paddle: 2.3.1.post112 PaddleOCR: 2.6 Related components: PaddleOCR\r\n- python/Version: 3.9.12\r\n- Model used: ppocrv3\r\n\r\nProblem image:\r\n![image](https://user-images.githubusercontent.com/34825635/189260271-ee896330-02ea-4290-a6da-a8b16a644be2.png)\r\n\r\n* When running inference with the recognition model alone (I masked sensitive information here):\r\n`\u524d\u8a00\uff0d\u5ba2\u6237\uff08\u201c\u7532\u65b9\u201d\uff09\u548cXXXXX\uff08\u201c\u4e59\u65b9\u201d\uff09\u6240\u7b7e\u8ba2\u7684\u4e1a\u52a1\u7ea6\u5b9a\u4e66\uff08\u201c\u4e1a\u52a1\u7ea6\u5b9a\u4e66\u201d\uff09\u53ca\u672c\u4e1a\u52a1\u6761\u6b3e\u5176\u540c\u6784\u6210`\r\n* When using PaddleOCR:\r\n`\uff08\uff0c\uff09\uff087\uff0c\uff09\u662f\u65f6\uff08\uff0c\uff09\uff0d`\r\n\r\n- Inference command:\r\n```\r\npython3 tools/infer/predict_rec.py --image_dir=/home/hr/projects/ppocr/PaddleOCR/data/train_data/rec/train/XXXXX.png 
--rec_model_dir=/home/hr/projects/ppocr/PaddleOCR/output/inference/rec_ppocr_v3_distillation/Teacher --rec_image_shape=\"3, 48, 640\" --rec_char_dict_path=/home/hr/projects/ppocr/PaddleOCR/ppocr/utils/ppocr_keys_v1.txt\r\n```\r\n- Config file parameters:\r\n```\r\n# image_shape was changed\r\nimage_shape: [3, 48, 640]\r\n```\r\n- PaddleOCR loading parameter settings:\r\n```\r\npaddle_ocr_engine = PaddleOCR(\r\n use_angle_cls=True, \r\n lang=\"ch\", \r\n rec_model_dir=\"./output/inference/rec_ppocr_v3_distillation/Teacher\",\r\n rec_image_shape=\"3, 48, 640\",\r\n rec_char_dict_path=\"./ppocr/utils/ppocr_keys_v1.txt\") \r\n```\r\n\r\nAny help or suggestions would be greatly appreciated!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2e352dcc06ba86159099ec6a2928c7ce556a7245', 'files': [{'path': 'paddleocr.py', 'Loc': {\"('PaddleOCR', '__init__', 445)\": {'mod': [480]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["paddleocr.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "443de01526a1c7108934990c4b646ed992f0bce8", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/5209", "iss_label": "status/close", "title": "How can pdserving return the text together with the text coordinates?", "body": "Currently pdserving only returns the text and not the text coordinates. How can I get it to return the text coordinates as well?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '443de01526a1c7108934990c4b646ed992f0bce8', 'files': [{'path': 'deploy/pdserving/ocr_reader.py', 'Loc': {\"('OCRReader', 'postprocess', 425)\": {'mod': []}}, 'status': 'modified'}, {'path': 'deploy/pdserving/web_service.py', 'Loc': {\"('DetOp', 'postprocess', 57)\": {'mod': [63]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["deploy/pdserving/web_service.py", "deploy/pdserving/ocr_reader.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "ab16f2e4f9a4eac2eeb5f0324ab950b2215780d0", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3735", "iss_label": "", "title": "Images for digit training: when detection and recognition are chained together, why is the recognized output Chinese?", "body": "I trained my own digit model, using both detection and recognition. Before converting to an inference model, it recognized digits. But when chaining detection and recognition together, following the official tutorial to convert to an inference model, why is the recognized output Chinese?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 
'ab16f2e4f9a4eac2eeb5f0324ab950b2215780d0', 'files': [{'path': 'configs/det/det_mv3_db.yml', 'Loc': {'(None, None, 116)': {'mod': [116]}}, 'status': 'modified'}, {'path': 'tools/infer/predict_det.py', 'Loc': {\"('TextDetector', '__init__', 38)\": {'mod': [42]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/infer/predict_det.py"], "doc": [], "test": [], "config": ["configs/det/det_mv3_db.yml"], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "efc01375c942d87dc1e20856c7159096db16a9ab", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/11715", "iss_label": "", "title": "Can ch_PP-OCRv4_rec_server_infer's support for English be put into the documentation?", "body": "I notice that if I am calling\r\n\r\n```\r\nfrom paddleocr import PaddleOCR\r\nocr = PaddleOCR(\r\ndet_model_dir='ch_PP-OCRv4_det_server_infer',\r\nrec_model_dir='ch_PP-OCRv4_rec_infer',\r\nlang='en')\r\n...\r\nresult = ocr.ocr(my_image)\r\n```\r\nthis works fine. However, if I set the rec model to the server version as well (`ch_PP-OCRv4_rec_server_infer`), then I get the following error:\r\n\r\n```\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/paddleocr.py\", line 661, in ocr\r\n dt_boxes, rec_res, _ = self.__call__(img, cls)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/tools/infer/predict_system.py\", line 105, in __call__\r\n rec_res, elapse = self.text_recognizer(img_crop_list)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/tools/infer/predict_rec.py\", line 628, in __call__\r\n rec_result = self.postprocess_op(preds)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 121, in __call__\r\n text = self.decode(preds_idx, preds_prob, is_remove_duplicate=True)\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 83, in decode\r\n char_list = [\r\n File \"/opt/conda/lib/python3.10/site-packages/paddleocr/ppocr/postprocess/rec_postprocess.py\", line 84, in <listcomp>\r\n self.character[text_id]\r\nIndexError: list index out of range\r\n```\r\n\r\nI'm guessing this is because it's trying to output Chinese, which has an 8000-character dict, whereas English only has 90 or so. Because it says English is supported by the server model (https://github.com/PaddlePaddle/PaddleOCR/blob/release/2.7/doc/doc_ch/models_list.md), is it possible to get the PP-OCRv4 server model to output English successfully? 
\r\n<img width=\"1274\" alt=\"Screen Shot 2024-03-11 at 10 12 15 PM\" src=\"https://github.com/PaddlePaddle/PaddleOCR/assets/21298347/f0b204ea-c7d3-4368-a939-4c9f99b111fb\">\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'efc01375c942d87dc1e20856c7159096db16a9ab', 'files': [{'path': 'paddleocr.py', 'Loc': {'(None, None, None)': {'mod': [76, 80]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["paddleocr.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "9d44728da81e7d56ea5f437845d8d48bc278b086", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3248", "iss_label": "", "title": "\u68c0\u6d4b\u548c\u8bc6\u522b\u600e\u4e48\u8fde\u63a5", "body": "\u60f3\u7528\u8f7b\u91cf\u5316\u7684\u68c0\u6d4b\u6a21\u578b\u914d\u5408RCNN\u8bc6\u522b\uff0c\u4e0d\u77e5\u9053\u600e\u4e48\u5c06\u4e24\u4e2a\u9636\u6bb5\u8fde\u63a5\u5728\u4e00\u8d77\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9d44728da81e7d56ea5f437845d8d48bc278b086', 'files': [{'path': 'doc/doc_ch/inference.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["doc/doc_ch/inference.md"], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "582e868cf84fca911e195596053f503f890b561b", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/8641", "iss_label": "status/close", "title": "\u8bf7\u5236\u4f5cPP-Structure\u7684PaddleServing\u4f8b\u5b50\u5427", "body": "\u8981\u5199PP-Structure\u5728paddle_serving_server.web_service\u4e2d\u7684Op\u7c7b\uff0c\u611f\u89c9\u6211\u8fd9\u4e2a\u65b0\u624b\u505a\u4e0d\u5230\u554a\u3002\r\n\u6709\u6ca1\u6709\u5927\u795e\u505a\u597d\u4f8b\u5b50\uff0c\u8ba9\u65b0\u624b\u590d\u7528\u5462", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '582e868cf84fca911e195596053f503f890b561b', 'files': [{'path': 'deploy/hubserving/readme.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["deploy/hubserving/readme.md"], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "35449b5c7440f7706e5a4558e5b3efeb76944432", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3844", "iss_label": "", "title": "HOW TO RESUME TRAINING FROM LAST CHECKPOINT?", "body": "Hi,\r\nI have been training a model on my own dataset, How I can resume the training from last checkpoint saved? And also when I train the model does it save Best weights automatically to some path or we need to provide some argument to do it. 
\r\nPlease help me on this.\r\n\r\nThanks..", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '35449b5c7440f7706e5a4558e5b3efeb76944432', 'files': [{'path': 'tools/program.py', 'Loc': {\"('ArgsParser', '__init__', 39)\": {'mod': [42, 42]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["tools/program.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "adba814904eb4f0aeeec186f158cfb6c212a6e26", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/3942", "iss_label": "", "title": "\u6a21\u578b\u5e93404", "body": "ch_ppocr_mobile_slim_v2.1_det \u63a8\u7406\u6a21\u578b\r\nch_ppocr_mobile_v2.1_det \u63a8\u7406\u548c\u8bad\u7ec3\u6a21\u578b\r\n\u4e0a\u9762\u7684\u5230\u76ee\u524d\u662f404\u72b6\u6001\uff0c\u65e0\u6cd5\u4e0b\u8f7d", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'adba814904eb4f0aeeec186f158cfb6c212a6e26', 'files': [{'path': 'doc/doc_ch/models_list.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["doc/doc_ch/models_list.md"], "test": [], "config": [], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "c167df2f60d08085167cdc9431101f4b45a8a019", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/6838", "iss_label": "status/close", "title": "Mac M1 Pro can't install paddleOCR2.0.1~2.5.0.3, but I can install paddleOCR 1.1.1 and run successful.", "body": "\u8bf7\u63d0\u4f9b\u4e0b\u8ff0\u5b8c\u6574\u4fe1\u606f\u4ee5\u4fbf\u5feb\u901f\u5b9a\u4f4d\u95ee\u9898/Please provide the following information to quickly locate the problem\r\n\r\n- \u7cfb\u7edf\u73af\u5883/System Environment\uff1aMacBook\u00a0Pro\uff0814\u82f1\u5bf8\uff0c2021\u5e74\uff09\uff0cApple M1 Pro 16 GB\uff0c\r\n- \u7248\u672c\u53f7/Version\uff1aPycharm2022.1.2 and Anaconda create Python 3.8 environment.\r\n- Paddle\uff1a Monterey 12.3\r\n- PaddleOCR\uff1a2.0.1~2.5.0.3\r\n- \u95ee\u9898\u76f8\u5173\u7ec4\u4ef6/Related components\uff1aPaddleOCR\u3001Numpy\r\n- \u8fd0\u884c\u6307\u4ee4/Command Code\uff1a\r\n\r\n1. python3 -m pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple \uff08\u8fd0\u884c\u6b63\u5e38\uff0cRun OK !\uff09\r\n2. 
pip install \"paddleocr>=2.0.1\"\uff08\u8fd0\u884c\u5931\u8d25\uff0c\u62a5\u9519\uff0cToo much ERROR\uff01\uff09\uff08\u5982\u679c\u6211\u4e0d\u6307\u5b9apaddleOCR1\u7684\u7248\u672c\u53f7\uff0c\u4f1a\u81ea\u52a8\u5b89\u88c5paddleOCR 1.1.1\uff0c\u5e76\u4e14\u53ef\u4ee5\u6b63\u5e38\u8fd0\u884c\u4f7f\u7528\uff0c\u53732.0.1\u7248\u672c\u5f00\u59cb\u5168\u90e8\u5b89\u88c5\u5931\u8d25\uff09\r\n\r\n- \u5b8c\u6574\u62a5\u9519/Complete Error Message\uff08\u8be6\u89c1markdown\u6587\u6863\uff0c\u592a\u957f\u4e86\uff0c\u8fd9\u91cc\u4f20\u4e0d\u4e0a\u6765\uff09\r\n[\u3010Error Log\u3011Mac M1 Pro can't install paddleOCR2.0.1~2.5.0.3.md](https://github.com/PaddlePaddle/PaddleOCR/files/9075892/Error.Log.Mac.M1.Pro.can.t.install.paddleOCR2.0.1.2.5.0.3.md)\r\n\uff1a\r\n- `ERROR: Cannot install paddleocr==2.0.1, paddleocr==2.0.2, paddleocr==2.0.3, paddleocr==2.0.4, paddleocr==2.0.5, paddleocr==2.0.6, paddleocr==2.2, paddleocr==2.2.0.1, paddleocr==2.2.0.2, paddleocr==2.3, paddleocr==2.3.0.1, paddleocr==2.3.0.2, paddleocr==2.4, paddleocr==2.4.0.1, paddleocr==2.4.0.2, paddleocr==2.4.0.3, paddleocr==2.4.0.4, paddleocr==2.5, paddleocr==2.5.0.2 and paddleocr==2.5.0.3 because these package versions have conflicting dependencies.\r\n\r\nThe conflict is caused by:\r\n paddleocr 2.5.0.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.5.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.5 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.4 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4.0.1 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.4 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3.0.1 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.3 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.2.0.2 depends on opencv-contrib-python==4.4.0.46\r\n paddleocr 2.2.0.1 depends on opencv-contrib-python==4.2.0.32\r\n paddleocr 2.2 depends on opencv-contrib-python==4.2.0.32\r\n paddleocr 2.0.6 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.5 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.4 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.3 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.2 depends on opencv-python==4.2.0.32\r\n paddleocr 2.0.1 depends on opencv-python==4.2.0.32\r\n\r\nTo fix this you could try to:\r\n\r\n1. loosen the range of package versions you've specified\r\n2. 
remove package versions to allow pip attempt to solve the dependency conflict\r\n\r\nERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/user_guide/#fixing-conflicting-dependencies`\r\n<img width=\"1544\" alt=\"image\" src=\"https://user-images.githubusercontent.com/29346824/178091955-5d71f63b-6bd5-477e-88e4-cb29cb161124.png\">\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c167df2f60d08085167cdc9431101f4b45a8a019', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 10)': {'mod': [10]}}, 'status': 'modified'}, {'path': 'setup.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["setup.py"], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "PaddlePaddle", "repo_name": "PaddleOCR", "base_commit": "e44c2af7622c97d3faecd37b062e7f1cb922fd40", "iss_html_url": "https://github.com/PaddlePaddle/PaddleOCR/issues/10298", "iss_label": "status/close", "title": "train warning", "body": "\u8bf7\u63d0\u4f9b\u4e0b\u8ff0\u5b8c\u6574\u4fe1\u606f\u4ee5\u4fbf\u5feb\u901f\u5b9a\u4f4d\u95ee\u9898/Please provide the following information to quickly locate the problem\r\n\r\n- \u7cfb\u7edf\u73af\u5883/System Environment\uff1aubantu \r\n- \u7248\u672c\u53f7/Version\uff1aPaddle\uff1a PaddleOCR\uff1a \u95ee\u9898\u76f8\u5173\u7ec4\u4ef6/Related components\uff1apaddle develop 0.0.0.post116\r\n\r\n- \u8fd0\u884c\u6307\u4ee4/Command Code\uff1a\r\n- \u5b8c\u6574\u62a5\u9519/Complete Error Message\uff1a\r\n\u4e00\u76f4\u6709\u597d\u591a\u8fd9\u79cd\u8b66\u544a\r\nI0705 11:55:13.443581 28582 eager_method.cc:140] Warning:: 0D Tensor cannot be used as 'Tensor.numpy()[0]' . In order to avoid this problem, 0D Tensor will be changed to 1D numpy currently, but it's not correct and will be removed in release 2.6. 
For Tensor contain only one element, Please modify 'Tensor.numpy()[0]' to 'float(Tensor)' as soon as possible, otherwise 'Tensor.numpy()[0]' will raise error in release 2.6.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e44c2af7622c97d3faecd37b062e7f1cb922fd40', 'files': [{'path': 'tools/program.py', 'Loc': {\"(None, 'train', 176)\": {'mod': [349]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["tools/program.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "dc866c91b9191bce083ec908c5665b7f2f40bd17", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/201", "iss_label": "", "title": "gpt 3", "body": "hi\r\ncan we use gpt 3 api free key ?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dc866c91b9191bce083ec908c5665b7f2f40bd17', 'files': [{'path': 'scripts/rerun_edited_message_logs.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scripts/rerun_edited_message_logs.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "5505ec41dd49eb1e86aa405335f40d7a8fa20b0a", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/497", "iss_label": "", "title": "main.py is missing?", "body": "main.py is missing?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5505ec41dd49eb1e86aa405335f40d7a8fa20b0a', 'files': [{'path': 'gpt_engineer/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5\n\u8be2\u95ee\u6587\u4ef6\u7684\u4f4d\u7f6e", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_engineer/"]}} +{"organization": "AntonOsika", "repo_name": "gpt-engineer", "base_commit": "a55265ddb46462548a842dae914bb5fcb22181fa", "iss_html_url": "https://github.com/AntonOsika/gpt-engineer/issues/509", "iss_label": "", "title": "Error with Promtfile", "body": "When I try to run the example file I get this error even though there is something in the prompt file, which you can see from the screenshots is in the example folder. 
Does anyone know how I can solve this problem?\r\n\r\n![Screenshot_Error](https://github.com/AntonOsika/gpt-engineer/assets/62028361/cf8c7992-eca9-4bed-b258-bc1bf279082b)\r\n\r\n![Screenshot_of_promt](https://github.com/AntonOsika/gpt-engineer/assets/62028361/a3d573b1-b9da-4201-9980-709c543dadde)\r\n\r\n![image](https://github.com/AntonOsika/gpt-engineer/assets/62028361/dd8edc0d-3248-4d1c-9813-c388f4b81fb5)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a55265ddb46462548a842dae914bb5fcb22181fa', 'files': [{'path': 'projects/example/prompt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["projects/example/prompt"]}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "cca0ca704a713ab153938e78de6787609c723cad", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1147", "iss_label": "", "title": "urllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party..", "body": "Hello Guys.\r\nThis is the error I'm getting when I am trying to use the image prompt issue\r\n\r\nurllib.error.URLError: <urlopen error [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond>\r\nTotal time: 21.12 seconds\r\n\r\nDo you happen to know what could be the problem?\r\n\r\nthanks in advance!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'cca0ca704a713ab153938e78de6787609c723cad', 'files': [{'path': 'troubleshoot.md', 'Loc': {'(None, None, 43)': {'mod': [43]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "lllyasviel", "pro": "misc", "path": ["ip-adapter-plus-face_sdxl_vit-h.bin"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["troubleshoot.md"], "test": [], "config": [], "asset": ["ip-adapter-plus-face_sdxl_vit-h.bin"]}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "fc3588875759328d715fa07cc58178211a894386", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/602", "iss_label": "", "title": "[BUG]Memory Issue when generating images for the second time", "body": "When I generate images first time with one image prompt, everything works fine.\r\nHowever, at the second generation, the GPU memory run out.\r\n\r\nHere is the error\r\n\r\n`Preparation time: 19.46 seconds\r\nloading new\r\nERROR diffusion_model.output_blocks.0.1.transformer_blocks.4.ff.net.0.proj.weight CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 22.41 GiB total capacity; 21.52 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR diffusion_model.output_blocks.0.1.transformer_blocks.5.attn1.to_v.weight CUDA out of memory. 
Tried to allocate 20.00 MiB (GPU 0; 22.41 GiB total capacity; 21.56 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR diffusion_model.output_blocks.0.1.transformer_blocks.5.attn1.to_out.0.weight CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 22.41 GiB total capacity; 21.56 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nTraceback (most recent call last):\r\n File \"D:\\Repos\\Fooocus\\modules\\async_worker.py\", line 551, in worker\r\n handler(task)\r\n File \"D:\\Repos\\Fooocus\\venv\\Lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\venv\\Lib\\site-packages\\torch\\utils\\_contextlib.py\", line 115, in decorate_context\r\n return func(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\modules\\async_worker.py\", line 460, in handler\r\n comfy.model_management.load_models_gpu([pipeline.final_unet])\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 397, in load_models_gpu\r\n cur_loaded_model = loaded_model.model_load(lowvram_model_memory)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 286, in model_load\r\n raise e\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 282, in model_load\r\n self.real_model = self.model.patch_model(device_to=patch_model_to) #TODO: do something with loras and offloading to CPU\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_patcher.py\", line 161, in patch_model\r\n temp_weight = comfy.model_management.cast_to_device(weight, device_to, torch.float32, copy=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\Repos\\Fooocus\\repositories\\ComfyUI-from-StabilityAI-Official\\comfy\\model_management.py\", line 498, in cast_to_device\r\n return tensor.to(device, copy=copy).to(dtype)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 50.00 MiB (GPU 0; 22.41 GiB total capacity; 21.56 GiB already allocated; 11.69 MiB free; 22.21 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nTotal time: 209.24 seconds\r\nKeyboard interruption in main thread... closing server.\r\n\r\nProcess finished with exit code -1\r\n`\r\nAt my second attempt to track this error, I add an endpoint. When I clicked generate and it meet the endpoint, I found 6 models have been loaded into memory. 
May be this is the issue?\r\n\r\n![image](https://github.com/lllyasviel/Fooocus/assets/93906313/bb31e548-3a5c-4a85-9a63-251fdf7584fb)\r\n![47760ff1859b15fa74cf9dea3aa17a5](https://github.com/lllyasviel/Fooocus/assets/93906313/314e07a3-7112-438a-a6f9-bcb24ae4e5a9)\r\n\r\nThanks for help~\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fc3588875759328d715fa07cc58178211a894386', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "f3084894402a4c0b7ed9e7164466bcedd5f5428d", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/1508", "iss_label": "", "title": "Problems with installation and correct operation.", "body": "Hello, I had problems installing Fooocus on a GNU/Linux system, many errors occurred during the installation and they were all different. I was not able to capture some of them, but in general terms the errors were as follows: \"could not find versions of python packages that satisfy dependencies (error during installation)\",\"(when clicking the \"generate\" button) \"nvidia drivers were not available found, make sure you have them installed \"link to official website\".\r\n\r\nI managed to save the output of the following errors:\r\n\r\n\r\nERROR: Could not find a version that satisfies the requirement accelerate==0.21.0 (from -r requirements_versions.txt (line 5)) (from versions: 0.0.1, 0.1.0, 0.2.0, 0.2.1, 0.3.0, 0.4.0, 0.5.0, 0.5.1, 0.6.0, 0.6.1, 0.6.2, 0.7.0, 0.7.1, 0.8.0, 0.9.0, 0.10.0, 0.11.0, 0.12.0, 0.13.0, 0.13.1, 0.13.2, 0.14.0, 0.15.0, 0.16.0, 0.17.0, 0.17.1, 0.18.0, 0.19.0, 0.20.0, 0.20.1, 0.20.2, 0.20.3)\r\nERROR: No matching distribution found for accelerate==0.21.0 (from -r requirements_versions.txt (line 5))\r\n\r\n\r\n\r\n\r\n\r\npython entry_with_update.py\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py']\r\nPython 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0]\r\nFooocus version: 2.1.853\r\nRunning on local URL: http://127.0.0.1:7865\r\n\r\nTo create a public link, set share=True in launch().\r\nTotal VRAM 12006 MB, total RAM 31850 MB\r\nxformers version: 0.0.16\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/Fooocus/ldm_patched/modules/model_management.py\", line 222, in <module>\r\n import accelerate\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/__init__.py\", line 3, in <module>\r\n from .accelerator import Accelerator\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/accelerator.py\", line 35, in <module>\r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n from .utils import (\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/__init__.py\", line 131, in <module>\r\n from .bnb import has_4bit_bnb_layers, load_and_quantize_model\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in 
_import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/bnb.py\", line 42, in <module>\r\n import bitsandbytes as bnb\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n from .autograd._functions import (\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 5, in <module>\r\n import bitsandbytes.functional as F\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 13, in <module>\r\n from .cextension import COMPILED_WITH_CUDA, lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 113, in <module>\r\n lib = CUDASetup.get_instance().lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 109, in get_instance\r\n cls._instance.initialize()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 59, in initialize\r\n binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 125, in evaluate_cuda_setup\r\n cuda_version_string = get_cuda_version(cuda, cudart_path)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 45, in get_cuda_version\r\n check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 387, in getattr\r\n func = self.getitem(name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 392, in getitem\r\n func = self._FuncPtr((name_or_ordinal, self))\r\nAttributeError: python: undefined symbol: cudaRuntimeGetVersion\r\n\r\nERROR: LOW VRAM MODE NEEDS accelerate.\r\nSet vram state to: NORMAL_VRAM\r\nAlways offload VRAM\r\nDevice: cuda:0 NVIDIA GeForce RTX 4070 : \r\nVAE dtype: torch.float32\r\nUsing xformers cross attention\r\nException in thread Thread-2 (worker):\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1086, in _get_module\r\n return importlib.import_module(\".\" + module_name, self.name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/importlib/__init__.py\", line 126, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1050, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 1027, 
in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 688, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 883, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 241, in _call_with_frames_removed\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/models/clip/modeling_clip.py\", line 27, in <module>\r\n from ...modeling_utils import PreTrainedModel\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/modeling_utils.py\", line 85, in <module>\r\n from accelerate import version as accelerate_version\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/__init__.py\", line 3, in <module>\r\n from .accelerator import Accelerator\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/accelerator.py\", line 35, in <module>\r\n from .checkpointing import load_accelerator_state, load_custom_state, save_accelerator_state, save_custom_state\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/checkpointing.py\", line 24, in <module>\r\n from .utils import (\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/__init__.py\", line 131, in <module>\r\n from .bnb import has_4bit_bnb_layers, load_and_quantize_model\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/accelerate/utils/bnb.py\", line 42, in <module>\r\n import bitsandbytes as bnb\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/__init__.py\", line 6, in <module>\r\n from .autograd._functions import (\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n\r\nFile \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/autograd/_functions.py\", line 5, in <module>\r\n import bitsandbytes.functional as F\r\n File 
\"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/functional.py\", line 13, in <module>\r\n from .cextension import COMPILED_WITH_CUDA, lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 113, in <module>\r\n lib = CUDASetup.get_instance().lib\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 109, in get_instance\r\n cls._instance.initialize()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cextension.py\", line 59, in initialize\r\n binary_name, cudart_path, cuda, cc, cuda_version_string = evaluate_cuda_setup()\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 125, in evaluate_cuda_setup\r\n cuda_version_string = get_cuda_version(cuda, cudart_path)\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/bitsandbytes/cuda_setup/main.py\", line 45, in get_cuda_version\r\n check_cuda_result(cuda, cudart.cudaRuntimeGetVersion(ctypes.byref(version)))\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 387, in getattr\r\n func = self.getitem(name)\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/ctypes/__init__.py\", line 392, in getitem\r\n func = self._FuncPtr((name_or_ordinal, self))\r\nAttributeError: python: undefined symbol: cudaRuntimeGetVersion\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/threading.py\", line 1016, in _bootstrap_inner\r\n self.run()\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/threading.py\", line 953, in run\r\n self._target(*self._args, **self._kwargs)\r\n File \"/home/dragon_flow/Fooocus/modules/async_worker.py\", line 25, in worker\r\n import modules.default_pipeline as pipeline\r\n File \"/home/dragon_flow/Fooocus/modules/default_pipeline.py\", line 1, in <module>\r\n import modules.core as core\r\n File \"/home/dragon_flow/Fooocus/modules/core.py\", line 1, in <module>\r\n from modules.patch import patch_all\r\n File \"/home/dragon_flow/Fooocus/modules/patch.py\", line 29, in <module>\r\n from modules.patch_clip import patch_all_clip\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"/home/dragon_flow/Fooocus/modules/patch_clip.py\", line 23, in <module>\r\n from transformers import CLIPTextModel, CLIPTextConfig, modeling_utils, CLIPVisionConfig, CLIPVisionModelWithProjection\r\n File \"/home/dragon_flow/.local/lib/python3.10/site-packages/shiboken2/files.dir/shibokensupport/__feature__.py\", line 142, in _import\r\n return original_import(name, *args, **kwargs)\r\n File \"<frozen importlib._bootstrap>\", line 1075, in _handle_fromlist\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1077, in getattr\r\n value = getattr(module, name)\r\n File 
\"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1076, in getattr\r\n module = self._get_module(self._class_to_module[name])\r\n File \"/home/dragon_flow/anaconda3/envs/fooocus/lib/python3.10/site-packages/transformers/utils/import_utils.py\", line 1088, in _get_module\r\n raise RuntimeError(\r\n\r\nRuntimeError: Failed to import transformers.models.clip.modeling_clip because of the following error (look up to see its traceback):\r\npython: undefined symbol: cudaRuntimeGetVersion\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f3084894402a4c0b7ed9e7164466bcedd5f5428d', 'files': [{'path': 'requirements_versions.txt', 'Loc': {'(None, None, 5)': {'mod': [5]}}, 'status': 'modified'}, {'path': 'readme.md', 'Loc': {'(None, None, 152)': {'mod': [152]}}, 'status': 'modified'}, {'path': 'troubleshoot.md', 'Loc': {'(None, None, 107)': {'mod': [107]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["readme.md", "troubleshoot.md"], "test": [], "config": ["requirements_versions.txt"], "asset": []}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "225947ac1a603124b0274da3e94d2c6cba65f732", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/500", "iss_label": "", "title": "is this a local model or not", "body": "is this a local model or not\r\n\r\ni dont get how it could show someone elses promts if its local", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '225947ac1a603124b0274da3e94d2c6cba65f732', 'files': [{'path': 'models/checkpoints', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/checkpoints"]}} +{"organization": "lllyasviel", "repo_name": "Fooocus", "base_commit": "d7439b2d6004d50a0fda19108603a8d1941a185e", "iss_html_url": "https://github.com/lllyasviel/Fooocus/issues/3689", "iss_label": "bug\ntriage", "title": "[Bug]: Exits upon attempting to load a model on Windows", "body": "### Checklist\n\n- [X] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)\n- [X] The issue exists on a clean installation of Fooocus\n- [X] The issue exists in the current version of Fooocus\n- [X] The issue has not been reported before recently\n- [ ] The issue has been reported before but has not been fixed yet\n\n### What happened?\n\nAttempting to run Fooocus on Windows 11 (and possibly 10, haven't tested) simply exits when attempting to load the default model, no error or nothing.\n\n### Steps to reproduce the problem\n\n1. Install Fooocus on Windows 11 with a NVIDIA GPU\r\n2. 
Attempt to run it.\n\n### What should have happened?\n\nIt should've loaded the model successfully.\n\n### What browsers do you use to access Fooocus?\n\nMozilla Firefox\n\n### Where are you running Fooocus?\n\nLocally\n\n### What operating system are you using?\n\nWindows 11 (23H2)\n\n### Console logs\n\n```Shell\n(fooocus_env) D:\\Misc4\\Fooocus>python entry_with_update.py\r\nAlready up-to-date\r\nUpdate succeeded.\r\n[System ARGV] ['entry_with_update.py']\r\nPython 3.12.7 (tags/v3.12.7:0b05ead, Oct 1 2024, 03:06:41) [MSC v.1941 64 bit (AMD64)]\r\nFooocus version: 2.5.5\r\n[Cleanup] Attempting to delete content of temp dir C:\\Users\\hkcu\\AppData\\Local\\Temp\\fooocus\r\n[Cleanup] Cleanup successful\r\nTotal VRAM 12281 MB, total RAM 16317 MB\r\nSet vram state to: NORMAL_VRAM\r\nAlways offload VRAM\r\nDevice: cuda:0 NVIDIA GeForce RTX 4070 : native\r\nVAE dtype: torch.bfloat16\r\nUsing pytorch cross attention\r\nRefiner unloaded.\r\nIMPORTANT: You are using gradio version 3.41.2, however version 4.44.1 is available, please upgrade.\r\n--------\r\nRunning on local URL: http://127.0.0.1:7865\r\n\r\nTo create a public link, set `share=True` in `launch()`.\r\n\r\n(fooocus_env) D:\\Misc4\\Fooocus>\n```\n\n\n### Additional information\n\nUsing Fooocus on the exact same machine, with the exact same amount of swap configured (4Gb) works as normal.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd7439b2d6004d50a0fda19108603a8d1941a185e', 'files': [{'path': 'presets/default.json', 'Loc': {'(None, None, 2)': {'mod': [2]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": ["config.txt", "config_modification_tutorial.txt"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n1", "info_type": "Config\n"}, "loctype": {"code": ["presets/default.json"], "doc": [], "test": [], "config": ["config.txt", "config_modification_tutorial.txt"], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6383113e8527e1c73049e26d2b3482a1b0f54b30", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/376", "iss_label": "", "title": "\u5173\u4e8epublic url", "body": "![Screenshot 2023-04-08 170556](https://user-images.githubusercontent.com/78332286/230713429-e0cc9a3f-1da9-4e76-b24a-67c35624a866.png)\r\n\r\n\u8fd9\u4e2apublic url \u662f\u7ecf\u8fc7\u535a\u4e3b\u81ea\u5df1\u642d\u5efa\u7684\u670d\u52a1\u5668\u7684\u5417\uff1f\u6211\u672c\u5730\u642d\u5efa\u4e4b\u540e\u5728\u624b\u673a\u6253\u5f00\u8fd9\u4e2aurl\u4e5f\u80fd\u7528", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6383113e8527e1c73049e26d2b3482a1b0f54b30', 'files': [{'path': 'main.py', 'Loc': {'(None, None, None)': {'mod': [174]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["main.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6c13bb7b46519312222f9afacedaa16225b673a9", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1545", "iss_label": "ToDo", "title": "[Bug]: Qwen1.5-14B-chat \u8fd0\u884c\u4e0d\u4e86", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOneKeyInstall (\u4e00\u952e\u5b89\u88c5\u811a\u672c-windows)\n\n### Version | 
\u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\nTraceback (most recent call last):\r\n File \".\\request_llms\\local_llm_class.py\", line 158, in run\r\n for response_full in self.llm_stream_generator(**kwargs):\r\n File \".\\request_llms\\bridge_qwen_local.py\", line 46, in llm_stream_generator\r\n for response in self._model.chat_stream(self._tokenizer, query, history=history):\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\anaconda3\\envs\\GPT_academic371\\Lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1688, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'Qwen2ForCausalLM' object has no attribute 'chat_stream'\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\nTraceback (most recent call last):\r\n File \".\\request_llms\\local_llm_class.py\", line 158, in run\r\n for response_full in self.llm_stream_generator(**kwargs):\r\n File \".\\request_llms\\bridge_qwen_local.py\", line 46, in llm_stream_generator\r\n for response in self._model.chat_stream(self._tokenizer, query, history=history):\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"D:\\anaconda3\\envs\\GPT_academic371\\Lib\\site-packages\\torch\\nn\\modules\\module.py\", line 1688, in __getattr__\r\n raise AttributeError(f\"'{type(self).__name__}' object has no attribute '{name}'\")\r\nAttributeError: 'Qwen2ForCausalLM' object has no attribute 'chat_stream'\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6c13bb7b46519312222f9afacedaa16225b673a9', 'files': [{'path': 'request_llms/bridge_qwen_local.py', 'Loc': {\"('GetQwenLMHandle', 'llm_stream_generator', 34)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_qwen_local.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "dd7a01cda53628ea07ef6192bf257f9ad51f5f47", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/978", "iss_label": "", "title": "[Bug]: \u4ee3\u7406\u914d\u7f6e\u6210\u529f\uff0c\u4ee3\u7406\u6240\u5728\u5730\u67e5\u8be2\u8d85\u65f6\uff0c\u4ee3\u7406\u53ef\u80fd\u65e0\u6548", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nDocker\uff08Windows/Mac\uff09\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6309\u7167\u8981\u6c42\u4fee\u6539\u4ee3\u7406\u914d\u7f6e\u6587\u4ef6`config.py`\uff0c\u57fa\u4e8e`Dockerfile`\u6784\u5efa\u4e4b\u540e\u8fd0\u884c\u51fa\u73b0\uff0c`\u4ee3\u7406\u914d\u7f6e\u6210\u529f\uff0c\u4ee3\u7406\u6240\u5728\u5730\u67e5\u8be2\u8d85\u65f6\uff0c\u4ee3\u7406\u53ef\u80fd\u65e0\u6548`\u7684\u8b66\u544a\u26a0\ufe0f\uff0c\u5b9e\u9645\u8fd0\u884c\u62a5\u9519`ConnectionRefusedError: [Errno 111] Connection 
refused`\uff0c\u8bf7\u5e2e\u5e2e\u6211\u54ea\u91cc\u914d\u7f6e\u53ef\u80fd\u6709\u8bef\r\nps.\u4ee3\u7406\u670d\u52a1\u5730\u5740\u7aef\u53e3\u914d\u7f6e\u6b63\u786e\uff0c\u4e14\u8fd0\u884c\u6b63\u5e38\uff0c\u53ef\u4ee5\u8bbf\u95ee\u5916\u7f51\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n<img width=\"921\" alt=\"\u622a\u5c4f2023-07-21 21 12 53\" src=\"https://github.com/binary-husky/gpt_academic/assets/97352201/5f54b0b4-a515-4ae6-8360-1b1504683688\">\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dd7a01cda53628ea07ef6192bf257f9ad51f5f47', 'files': [{'path': 'check_proxy.py', 'Loc': {\"(None, 'check_proxy', 2)\": {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["check_proxy.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "ea4e03b1d892d462f71bab76ee0bec65d541f6b7", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1286", "iss_label": "", "title": "[Feature]: \u8bf7\u95ee\u662f\u5426\u6210\u529f\u4fee\u6539 api2d-gpt-3.5-turbo-16k \u7cfb\u5217\u6a21\u578b max_token \u4e3a 16385 ", "body": "### Class | \u7c7b\u578b\n\n\u5927\u8bed\u8a00\u6a21\u578b\n\n### Feature Request | \u529f\u80fd\u8bf7\u6c42\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ea4e03b1d892d462f71bab76ee0bec65d541f6b7', 'files': [{'path': 'request_llms/bridge_all.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["request_llms/bridge_all.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "526b4d8ecd1adbdcf97946b3bca4c89feda6ec04", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/850", "iss_label": "cause of issue is clear", "title": "[Bug]: Json\u5f02\u5e38 \u201cerror\u201d:", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\nTraceback (most recent call last):\r\n File \"./request_llm/bridge_chatgpt.py\", line 189, in predict\r\n if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0][\"delta\"]) == 0):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) 
from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n\r\nJson\u5f02\u5e38 \u201cerror\u201d: { \u201cmessage\u201d: \u201c\u201d, \u201ctype\u201d: \u201cinvalid_request_error\u201d, \u201cparam\u201d: null, \u201ccode\u201d: \u201cinvalid_api_key\u201d }}\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n<img width=\"1341\" alt=\"image\" src=\"https://github.com/binary-husky/gpt_academic/assets/125801419/c448d538-e762-4bbe-b76a-05d921c34ded\">\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\ngpt-3.5-turbo : 0 : 1 ..........\r\nTraceback (most recent call last):\r\n File \"/Users/zihengli/chatgpt_academic/request_llm/bridge_chatgpt.py\", line 189, in predict\r\n if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0][\"delta\"]) == 0):\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/Users/zihengli/anaconda3/envs/gptac_venv/lib/python3.11/json/decoder.py\", line 355, in raw_decode\r\n raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '526b4d8ecd1adbdcf97946b3bca4c89feda6ec04', 'files': [{'path': 'config.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {'(None, None, 101)': {'mod': [101]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["config.py"], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "fdffbee1b02bd515ceb4519ae2a830a547b695b4", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1137", "iss_label": "", "title": "[Bug]: Connection errored out", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nLinux\n\n### Describe the bug | \u7b80\u8ff0\n\n\u4f60\u597d, \u7248\u672c3.54\r\n\u90e8\u7f72\u5728vps\u4e0a, os\u662fubuntu 20.04\r\n\u6302\u5728\u4e86\u516c\u7f51, \u6b64\u524d\u5747\u53ef\u6b63\u5e38\u4f7f\u7528\r\n\u4f46\u662f\u7a81\u7136\u51fa\u73b0\u4e86\u8fd9\u6837\u7684\u95ee\u9898, \u5982\u4e0b\u56fe\r\n\r\n\u8bf7\u95ee\u8fd9\u662f\u4ec0\u4e48\u539f\u56e0\u5462? \u662f\u8be5vps\u7684ip\u4e0d\u884c, \u88abopenai ban\u4e86\u4e48? 
\u8fd8\u662f\u4ec0\u4e48\u522b\u7684\u539f\u56e0, \u8c22\u8c22\r\n\r\n\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![Snipaste_2023-09-30_15-01-00](https://github.com/binary-husky/gpt_academic/assets/59535777/9567364a-6bff-4878-b92a-94087a02c655)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fdffbee1b02bd515ceb4519ae2a830a547b695b4', 'files': [{'path': 'main.py', 'Loc': {\"(None, 'main', 3)\": {'mod': [287]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": ["nginx.conf"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0\n2\uff1f", "info_type": "Config"}, "loctype": {"code": ["main.py", "nginx.conf"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "a2002ebd85f441b3cd563bae28e9966006068ad6", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/462", "iss_label": "", "title": "ERROR: Invalid requirement: '__pycache__/' (from line 2 of requirements.txt)", "body": "**Describe the bug \u7b80\u8ff0**\r\nERROR: Invalid requirement: '__pycache__/' (from line 2 of requirements.txt)\r\n**Screen Shot \u622a\u56fe**\r\n![image](https://user-images.githubusercontent.com/46212839/231796758-e537f323-bb03-4fb1-97c8-3b80fddc8476.png)\r\n\r\n![image](https://user-images.githubusercontent.com/46212839/231796688-14d0eb47-8ea7-4d73-9ccd-259b1b10f5df.png)\r\n\r\n**Terminal Traceback \u7ec8\u7aeftraceback\uff08\u5982\u679c\u6709\uff09**\r\n\r\n\r\nBefore submitting an issue \u63d0\u4ea4issue\u4e4b\u524d\uff1a\r\n- Please try to upgrade your code. 
\u5982\u679c\u60a8\u7684\u4ee3\u7801\u4e0d\u662f\u6700\u65b0\u7684\uff0c\u5efa\u8bae\u60a8\u5148\u5c1d\u8bd5\u66f4\u65b0\u4ee3\u7801\r\n- Please check project wiki for common problem solutions.\u9879\u76ee[wiki](https://github.com/binary-husky/chatgpt_academic/wiki)\u6709\u4e00\u4e9b\u5e38\u89c1\u95ee\u9898\u7684\u89e3\u51b3\u65b9\u6cd5\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a2002ebd85f441b3cd563bae28e9966006068ad6', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "0485d01d67d6a41bb0810d6112f40602af1167a9", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/476", "iss_label": "cause of issue is clear", "title": "\u4e0a\u4f20\u6587\u4ef6\u65f6\u91cd\u590d\u4e0a\u4f20", "body": "\r\n\u4e0a\u4f20\u6587\u4ef6\u65f6\u91cd\u590d\u4e0a\u4f20\r\n\u6837\u4f8b\u6587\u4ef6[1.docx](https://github.com/binary-husky/chatgpt_academic/files/11230280/1.docx)\r\n\u754c\u9762![TE$JF@(Q$565$CWJ4)9(A(P](https://user-images.githubusercontent.com/51219393/231979388-e73140de-f563-40c6-9e97-7f0148505cec.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0485d01d67d6a41bb0810d6112f40602af1167a9', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 1)': {'mod': [1]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e594e1b928aadb36d291184bca1deee8601621a8", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1489", "iss_label": "", "title": "[Bug]: \u867d\u7136PDF\u751f\u6210\u5931\u8d25\u4e86, \u4f46\u8bf7\u67e5\u6536\u7ed3\u679c\uff08\u538b\u7f29\u5305\uff09, \u5185\u542b\u5df2\u7ecf\u7ffb\u8bd1\u7684Tex\u6587\u6863, \u60a8\u53ef\u4ee5\u5230Github Issue\u533a, \u7528\u8be5\u538b\u7f29\u5305\u8fdb\u884c\u53cd\u9988\u3002\u5982\u7cfb\u7edf\u662fLinux\uff0c\u8bf7\u68c0\u67e5\u7cfb\u7edf\u5b57\u4f53\uff08\u89c1Github wiki\uff09 ...", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nAnaconda (I used latest requirements.txt)\n\n### Version | \u7248\u672c\n\nLatest | \u6700\u65b0\u7248\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nWindows\n\n### Describe the bug | \u7b80\u8ff0\n\n\u7531\u4e8e\u6700\u4e3a\u5173\u952e\u7684\u8f6c\u5316PDF\u7f16\u8bd1\u5931\u8d25, \u5c06\u6839\u636e\u62a5\u9519\u4fe1\u606f\u4fee\u6b63tex\u6e90\u6587\u4ef6\u5e76\u91cd\u8bd5, \u5f53\u524d\u62a5\u9519\u7684latex\u4ee3\u7801\u5904\u4e8e\u7b2c[-1]\u884c ...\r\n\r\n\u867d\u7136PDF\u751f\u6210\u5931\u8d25\u4e86, \u4f46\u8bf7\u67e5\u6536\u7ed3\u679c\uff08\u538b\u7f29\u5305\uff09, \u5185\u542b\u5df2\u7ecf\u7ffb\u8bd1\u7684Tex\u6587\u6863, \u60a8\u53ef\u4ee5\u5230Github Issue\u533a, \u7528\u8be5\u538b\u7f29\u5305\u8fdb\u884c\u53cd\u9988\u3002\u5982\u7cfb\u7edf\u662fLinux\uff0c\u8bf7\u68c0\u67e5\u7cfb\u7edf\u5b57\u4f53\uff08\u89c1Github wiki\uff09 
...\r\n\r\n\u62a5\u544a\u5df2\u7ecf\u6dfb\u52a0\u5230\u53f3\u4fa7\u201c\u6587\u4ef6\u4e0a\u4f20\u533a\u201d\uff08\u53ef\u80fd\u5904\u4e8e\u6298\u53e0\u72b6\u6001\uff09\uff0c\u8bf7\u67e5\u6536\u3002\r\n[gpt_log\\default_user\\shared\\2024-01-18-14-25-51-result.zip](http://localhost:50649/file=C:/Users/admin/gpt_academic/gpt_log/default_user/shared/2024-01-18-14-25-51-result.zip)\r\n[gpt_log\\default_user\\shared\\2024-01-18-14-25-41.trans.html](http://localhost:50649/file=C:/Users/admin/gpt_academic/gpt_log/default_user/shared/2024-01-18-14-25-41.trans.html)\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![image](https://github.com/binary-husky/gpt_academic/assets/102421741/01fc2c02-ea15-4717-af77-e89797e407d1)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n[2024-01-18-14-25-51-result.zip](https://github.com/binary-husky/gpt_academic/files/13973247/2024-01-18-14-25-51-result.zip)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": ".tex"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": [".tex"]}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "9540cf9448026a1c8135c750866b63d320909718", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/257", "iss_label": "", "title": "Something went wrong Connection errored out.", "body": "### Describe the bug\r\n\r\n\u542f\u52a8\u7a0b\u5e8f\u540e\uff0c\u80fd\u6253\u5f00\u9875\u9762\u6b63\u5e38\u663e\u793a\uff0c\u4f46\u662f\u4e0a\u4f20\u6587\u6863\u6216\u8005\u53d1\u9001\u63d0\u95ee\u6cd5\u4f1a\u51fa\u9519\u201cSomething went wrong Connection errored out.\u201d\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [ ] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n\u6309\u7167\u6b63\u5e38\u6b65\u9aa4\uff1a\r\ngit clone https://github.com/binary-husky/chatgpt_academic.git\r\ncd chatgpt_academic\r\npython -m pip install -r requirements.txt \r\npython main.py\r\n\r\nconfig.py\u7684\u914d\u7f6e\u662f\uff1a\r\nUSE_PROXY = True\r\n\r\n### Screenshot\r\n\r\n<img width=\"1400\" alt=\"image\" src=\"https://user-images.githubusercontent.com/66538098/229296702-36166ed4-d077-4ee8-9af3-d263b3039dc5.png\">\r\n<img width=\"1320\" alt=\"image\" src=\"https://user-images.githubusercontent.com/66538098/229327959-e8d3857d-9495-4c28-8a3f-cf1a8d294248.png\">\r\n\u7ed9\u51fa\u4e86\u6b63\u786e\u7684API key\uff0c\u5374\u53d1\u73b0\u4ece\u6ca1\u4f7f\u7528\u8fc7\r\n<img width=\"809\" alt=\"image\" src=\"https://user-images.githubusercontent.com/66538098/229331202-a1850a02-d1f2-4a69-97d1-cb5e285d8e8f.png\">\r\n\r\n\r\n### Logs\r\n\r\n```shell\r\n\u63a7\u5236\u53f0\u62a5\u9519[Error] WebSocket connection to 'ws://localhost:62694/queue/join' failed: There was a bad response from the server. 
(x4)\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\ngradio:3.24.1\r\nProductName:macOS\r\nProductVersion:13.3\r\nBuildVersion:22E252\r\n```\r\n\r\n\r\n### Severity\r\n\r\nannoying", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "gradio-app", "pro": "gradio", "path": ["gradio/routes.py"]}], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gradio/routes.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "bfa6661367b7592e82225515e5e4845c4aad95bb", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/252", "iss_label": "", "title": "\u80fd\u4e0d\u80fd\u4f7f\u7528azure openai key?", "body": "\u4ee3\u7406\u670d\u52a1\u5668\u4e0d\u591f\u7a33\u5b9a\uff0c\u66f4\u9ebb\u70e6\u7684\u662f\u7ed9openai\u7eed\u8d39\uff0c\u8fd8\u8981\u4e2a\u7f8e\u56fd\u4fe1\u7528\u5361\r\n\r\n\u975e\u5e38\u597d\u7684\u5e94\u7528\uff0c\u5e0c\u671b\u51fa\u66f4\u591a\u7684\u63d2\u4ef6\u529f\u80fd\uff0c\u8c22\u8c22", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'bfa6661367b7592e82225515e5e4845c4aad95bb', 'files': [{'path': 'config.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["config.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "2d2e02040d7d91d2f2a4c34f4d0bf677873b5f4d", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1328", "iss_label": "", "title": "[Bug]: \u7cbe\u51c6\u7ffb\u8bd1PDF\u6587\u6863(NOUGAT)\u529f\u80fd\u51fa\u9519\uff0c", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOthers (Please Describe)\n\n### Version | \u7248\u672c\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6d4b\u8bd5\u670d\u52a1\u5668\uff0c\u7cbe\u51c6\u7ffb\u8bd1PDF\u6587\u6863(NOUGAT)\u529f\u80fd\u51fa\u9519\uff0c\u4f46\u662f\u53ef\u4ee5\u4f7f\u7528\u7cbe\u51c6\u7ffb\u8bd1PDF\u7684\u529f\u80fd\r\n\r\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/b9a0db8c-282a-4e02-a527-97fcf63eaaa0)\r\n\r\n\u62a5\u9519\u4fe1\u606f\u5982\u4e0b\r\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/b40e4d8b-ade9-4e27-86e5-75f6027fbbb0)\r\n\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![image](https://github.com/binary-husky/gpt_academic/assets/51499671/ac4995d5-0a68-433b-aa44-2b3c82bbc1e3)\r\nTraceback (most recent call last):\r\n File \"./toolbox.py\", line 159, in decorated\r\n yield from f(main_input, llm_kwargs, plugin_kwargs, chatbot_with_cookie, history, *args, **kwargs)\r\n File \"./crazy_functions/\u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863_NOUGAT.py\", line 93, in \u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863\r\n yield from \u89e3\u6790PDF_\u57fa\u4e8eNOUGAT(file_manifest, project_folder, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt)\r\n File \"./crazy_functions/\u6279\u91cf\u7ffb\u8bd1PDF\u6587\u6863_NOUGAT.py\", line 111, in \u89e3\u6790PDF_\u57fa\u4e8eNOUGAT\r\n fpp = yield from nougat_handle.NOUGAT_parse_pdf(fp, chatbot, history)\r\n File 
\"./crazy_functions/crazy_utils.py\", line 761, in NOUGAT_parse_pdf\r\n raise RuntimeError(\"Nougat\u89e3\u6790\u8bba\u6587\u5931\u8d25\u3002\")\r\nRuntimeError: Nougat\u89e3\u6790\u8bba\u6587\u5931\u8d25\u3002\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2d2e02040d7d91d2f2a4c34f4d0bf677873b5f4d', 'files': [{'path': 'crazy_functions/crazy_utils.py', 'Loc': {\"('nougat_interface', 'NOUGAT_parse_pdf', 739)\": {'mod': [752]}, \"('nougat_interface', None, 719)\": {'mod': [723]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["crazy_functions/crazy_utils.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "17abd29d5035b5b227deaad69d32cf437b23e542", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/94", "iss_label": "", "title": "[\u4e00\u4e9b\u5efa\u8bae]input\u6846\u8fd8\u662f\u592a\u5c0f\u4e86", "body": "RT \u591a\u884c\u8f93\u5165\u8fd8\u662f\u4e0d\u65b9\u4fbf\uff0c\u5982\u679c\u9002\u5f53\u8c03\u6574\u4f1a\u66f4\u597d\u7528\u3002\r\n\r\n\u5e0c\u671b\u91c7\u7eb3\uff0c\u611f\u8c22\u5206\u4eab\u3002", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '17abd29d5035b5b227deaad69d32cf437b23e542', 'files': [{'path': 'main.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["main.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "37744a9cb173477398a2609f02d5e7cef47eb677", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/1438", "iss_label": "", "title": "[Bug]: \u6d6e\u52a8\u8f93\u5165\u6846\u5728\u62d6\u81f3\u9876\u90e8\u540e\uff0c\u65e0\u6cd5\u91cd\u65b0\u79fb\u4f4d", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nOthers (Please Describe)\n\n### Version | \u7248\u672c\n\nPlease choose | \u8bf7\u9009\u62e9\n\n### OS | \u64cd\u4f5c\u7cfb\u7edf\n\nMac\n\n### Describe the bug | \u7b80\u8ff0\n\n\u6d6e\u52a8\u8f93\u5165\u6846\u5728\u62d6\u81f3\u9876\u90e8\u540e\uff0c\u65e0\u6cd5\u91cd\u65b0\u79fb\u4f4d\r\n\r\n\u671f\u671b\uff1a\u91cd\u65b0\u52fe\u9009\u540e\uff0c\u5e94\u8be5\u56de\u5230\u521d\u59cb\u4f4d\u7f6e\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![2024-01-02 14 36 52](https://github.com/binary-husky/gpt_academic/assets/46100050/86a648dc-ab38-486f-9a0b-7f71dde0bd57)\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/binary-husky/gradio-fix/commit/fb67dd12f58aa53c75a90378cddbc811ac3c01d2", "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "binary-husky", "pro": 
"gradio-fix", "path": ["{'base_commit': 'fb67dd12f58aa53c75a90378cddbc811ac3c01d2', 'files': [{'path': 'js/app/src/components/Floating/StaticFloating.svelte', 'status': 'modified', 'Loc': {'(None, None, 48)': {'add': [48]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gradio-fix", "js/app/src/components/Floating/StaticFloating.svelte"]}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6538c58b8e5a4a7ae08dfa1ae9970bc422158096", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/620", "iss_label": "", "title": "\u60f3\u95ee\u95eenewbing\u7684cookies\u600e\u4e48\u586b\u5199\uff0c\u6211\u4ecejavascript:alert(document.cookie)\u627e\u5230\u4e86cookies\u4f46\u662f\u4e00\u76f4\u663e\u793acookies\u6709\u9519", "body": "![image](https://user-images.githubusercontent.com/73226302/234341095-273ea6e0-aadc-4e19-8966-05709d61f9b1.png)\r\n![image](https://user-images.githubusercontent.com/73226302/234341151-017d0634-620a-4377-b972-ddb2d7a22d2a.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6538c58b8e5a4a7ae08dfa1ae9970bc422158096', 'files': [{'path': 'config.py', 'Loc': {'(None, None, None)': {'mod': [69]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "", "info_type": "Other"}, "loctype": {"code": ["config.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6d8c8cd3f0b9d2b6fe8d412b83f902cbd43fa0bd", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/150", "iss_label": "documentation\nhigh value issue", "title": "\u6709\u6ca1\u6709\u5b8c\u5168\u90e8\u7f72\u6210\u529f\u7684\u5927\u795e\u51fa\u4e2a\u8be6\u7ec6\u7684\u90e8\u7f72\u6b65\u9aa4\u5440\uff1fWindows \u6709\u622a\u56fe\uff0c\u8dea\u6c42", "body": "Windows\u5b89\u88c5\u90e8\u7f72\r\n\u57fa\u672c\u73af\uff1a\u5b89\u88c5anaconda\r\n1.\u4e0b\u8f7d\u9879\u76ee CMD\r\n\u9009\u62e9\u8def\u5f84\r\ngit clone https://github.com/binary-husky/chatgpt_academic.git\r\ncd chatgpt_academic\r\n\u6211\u4eec\u5efa\u8bae\u5c06config.py\u590d\u5236\u4e3aconfig_private.py\u5e76\u5c06\u540e\u8005\u7528\u4f5c\u4e2a\u6027\u5316\u914d\u7f6e\u6587\u4ef6\u4ee5\u907f\u514dconfig.py\u4e2d\u7684\u53d8\u66f4\u5f71\u54cd\u4f60\u7684\u4f7f\u7528\u6216\u4e0d\u5c0f\u5fc3\u5c06\u5305\u542b\u4f60\u7684OpenAI API KEY\u7684config.py\u63d0\u4ea4\u81f3\u672c\u9879\u76ee\u3002\r\ncp config.py config_private.py\r\n2.\u521b\u5efa\u865a\u62df\u73af\u5883 python 3.11\r\nconda create -n chatgpt python=3.11.0 #\u65b0\u5efa\u73af\u5883\u3001\r\n3.\u8fdb\u5165\u9879\u76ee\u4e0b\u8f7d\u8def\u5f84\r\n\u4f8b\u5982 cd G:\\python\\Program\\chatgpt_academic\r\n4.\u542f\u52a8\u865a\u62df\u73af\u5883\r\nconda activate chatgpt\r\n5. \u5b89\u88c5 gradio>=3.23\r\n\uff081\uff09\u5230https://pypi.org/project/gradio/ \u4e0b\u8f7dwhl\u7248\u672c\r\n\uff082\uff09pip install G:\\python\\Program\\chatgpt_academic\\gradio-3.23.0-py3-none-any.whl\r\n6.\u914d\u7f6e\u5176\u4ed6\u73af\u5883\r\n\uff081\uff09\u6253\u5f00requirements.txt\uff0c\u6ce8\u91ca\u6389gradio\uff0c\u7136\u540e\u4fdd\u5b58\r\n\uff082\uff09\u8fd0\u884c python -m pip install -r requirements.txt\r\n7.\u542f\u52a8\u4ee3\u7406\r\n8. 
\u914d\u7f6econfig_private.py\r\n\uff081\uff09\u6dfb\u52a0API_KEY\r\n\uff082\uff09\u4fee\u6539USE_PROXY = Ture\r\n\uff083\uff09\u4fee\u6539proxies\r\n\u5728\u6d4f\u89c8\u5668\u8f93\u5165: https://ipapi.co/json/\r\n\u6d4f\u89c8\u5668\u4e0a\u53f3\u952e->\u68c0\u67e5->\u7f51\u7edc->ctrl+r\r\n\u6253\u5f00json\uff0c\u5c06\u8fdc\u7a0b\u5730\u5740\u4fee\u6539\u5230proxies = { \"http\": \"104.26.9.44:443\", \"https\": \"104.26.9.44:443\", }\r\n9.\u542f\u52a8\u7a0b\u5e8f\r\npython main.py", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6d8c8cd3f0b9d2b6fe8d412b83f902cbd43fa0bd', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\n+ \nDoc"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "e20070939c6c7eeca33a8438041c9e038836957b", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/568", "iss_label": "enhancement", "title": "\u80fd\u5426\u589e\u52a0\u804a\u5929\u5185\u5bb9\u5bfc\u51fa\u529f\u80fd\uff1f", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["gpt_log/chat_secrets.log"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gpt_log/chat_secrets.log"]}} +{"organization": "binary-husky", "repo_name": "gpt_academic", "base_commit": "6c448b9a601ba4b9cc84e8bc625a3a91b1982ba4", "iss_html_url": "https://github.com/binary-husky/gpt_academic/issues/756", "iss_label": "", "title": "[Bug]: ", "body": "### Installation Method | \u5b89\u88c5\u65b9\u6cd5\u4e0e\u5e73\u53f0\n\nPip Install (I used latest requirements.txt and python>=3.8)\n\n### Describe the bug | \u7b80\u8ff0\n\n\u53ea\u6709\u51fa\u53bb\u7684\u6d88\u606f\uff0c\u6ca1\u6709\u8fd4\u56de\u6d88\u606f\uff0c\u8bd5\u8fc7\u4e86ap2id\u548cnewbing\n\n### Screen Shot | \u6709\u5e2e\u52a9\u7684\u622a\u56fe\n\n![\u5fae\u4fe1\u622a\u56fe_20230517095200](https://github.com/binary-husky/gpt_academic/assets/43396544/32d9bc41-351b-4ceb-a7d2-99e09b21ddb5)\r\n\n\n### Terminal Traceback & Material to Help Reproduce Bugs | \u7ec8\u7aeftraceback\uff08\u5982\u6709\uff09 + \u5e2e\u52a9\u6211\u4eec\u590d\u73b0\u7684\u6d4b\u8bd5\u6750\u6599\u6837\u672c\uff08\u5982\u6709\uff09\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6c448b9a601ba4b9cc84e8bc625a3a91b1982ba4', 'files': [{'path': 'request_llms/requirements_newbing.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "\u4f9d\u8d56"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["request_llms/requirements_newbing.txt"], "asset": []}} +{"organization": "deepseek-ai", "repo_name": "DeepSeek-V3", "base_commit": "6a30b43249a5710a3adb18c11763222d3fca8756", "iss_html_url": "https://github.com/deepseek-ai/DeepSeek-V3/issues/566", "iss_label": "", "title": "Please provide the code for your model architecture.", "body": "**Is your feature request related to a problem? Please describe.**\nThis repo only provides weights. 
It makes it difficult to confirm claims from the article.\n\n**Describe the solution you'd like**\n A repo where the code to the model architecture is provided. \n\n**Describe alternatives you've considered**\nClearly state that the model is not open source. \n\n**Additional context**\nNone\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6a30b43249a5710a3adb18c11763222d3fca8756', 'files': [{'path': 'inference/model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["inference/model.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepseek-ai", "repo_name": "DeepSeek-V3", "base_commit": "0d16ea24c8030a30d4fe8a75b28e05b03b4e0970", "iss_html_url": "https://github.com/deepseek-ai/DeepSeek-V3/issues/210", "iss_label": "", "title": "[BUG]convert\u540e\u8fd0\u884c\u9519\u8bef", "body": "**Describe the bug**\r\n[rank0]: ValueError: Unrecognized model in ../DV3-hf-32/. Should have a `model_type` key in its config.json, or contain one of the following strings in its name: albert, align, altclip, audio-spectrogram-transformer, autoformer, bark, bart, beit, bert, bert-generation, big_bird, bigbird_pegasus, biogpt, bit, blenderbot, blenderbot-small, blip, blip-2, bloom, bridgetower, bros, camembert, canine, chameleon, chinese_clip, chinese_clip_vision_model, clap, clip, clip_text_model, clip_vision_model, clipseg, clvp, code_llama, codegen, cohere, conditional_detr, convbert, convnext, convnextv2, cpmant, ctrl, cvt, dac, data2vec-audio, data2vec-text, data2vec-vision, dbrx, deberta, deberta-v2, decision_transformer, deformable_detr, deit, depth_anything, deta, detr, dinat, dinov2, distilbert, donut-swin, dpr, dpt, efficientformer, efficientnet, electra, encodec, encoder-decoder, ernie, ernie_m, esm, falcon, falcon_mamba, fastspeech2_conformer, flaubert, flava, fnet, focalnet, fsmt, funnel, fuyu, gemma, gemma2, git, glm, glpn, gpt-sw3, gpt2, gpt_bigcode, gpt_neo, gpt_neox, gpt_neox_japanese, gptj, gptsan-japanese, granite, granitemoe, graphormer, grounding-dino, groupvit, hiera, hubert, ibert, idefics, idefics2, idefics3, imagegpt, informer, instructblip, instructblipvideo, jamba, jetmoe, jukebox, kosmos-2, layoutlm, layoutlmv2, layoutlmv3, led, levit, lilt, llama, llava, llava_next, llava_next_video, llava_onevision, longformer, longt5, luke, lxmert, m2m_100, mamba, mamba2, marian, markuplm, mask2former, maskformer, maskformer-swin, mbart, mctct, mega, megatron-bert, mgp-str, mimi, mistral, mixtral, mllama, mobilebert, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, moshi, mpnet, mpt, mra, mt5, musicgen, musicgen_melody, mvp, nat, nemotron, nezha, nllb-moe, nougat, nystromformer, olmo, olmoe, omdet-turbo, oneformer, open-llama, openai-gpt, opt, owlv2, owlvit, paligemma, patchtsmixer, patchtst, pegasus, pegasus_x, perceiver, persimmon, phi, phi3, phimoe, pix2struct, pixtral, plbart, poolformer, pop2piano, prophetnet, pvt, pvt_v2, qdqbert, qwen2, qwen2_audio, qwen2_audio_encoder, qwen2_moe, qwen2_vl, rag, realm, recurrent_gemma, reformer, regnet, rembert, resnet, retribert, roberta, roberta-prelayernorm, roc_bert, roformer, rt_detr, rt_detr_resnet, rwkv, sam, seamless_m4t, seamless_m4t_v2, segformer, seggpt, sew, sew-d, siglip, siglip_vision_model, speech-encoder-decoder, speech_to_text, speech_to_text_2, speecht5, splinter, squeezebert, stablelm, starcoder2, 
superpoint, swiftformer, swin, swin2sr, swinv2, switch_transformers, t5, table-transformer, tapas, time_series_transformer, timesformer, timm_backbone, trajectory_transformer, transfo-xl, trocr, tvlt, tvp, udop, umt5, unispeech, unispeech-sat, univnet, upernet, van, video_llava, videomae, vilt, vipllava, vision-encoder-decoder, vision-text-dual-encoder, visual_bert, vit, vit_hybrid, vit_mae, vit_msn, vitdet, vitmatte, vits, vivit, wav2vec2, wav2vec2-bert, wav2vec2-conformer, wavlm, whisper, xclip, xglm, xlm, xlm-prophetnet, xlm-roberta, xlm-roberta-xl, xlnet, xmod, yolos, yoso, zamba, zoedepth\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior.\r\n\r\n**Expected behavior**\r\nA clear and concise description of what you expected to happen.\r\n\r\n**Screenshots**\r\nIf applicable, add screenshots to help explain your problem.\r\n\r\n**Additional context**\r\nAdd any other context about the problem here.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["tokenizer.json", "tokenizer_config.json"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["tokenizer_config.json", "tokenizer.json"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "c529bd4f1cb3a8abc53574b7211fc0b887107073", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/98", "iss_label": "wontfix", "title": "IndexError: list index out of range on training", "body": "```\r\n# python3.6 faceswap.py train -A ~/faceswap/data/trump -B ~/faceswap/data/stalone -m ~/faceswap/models/\r\nModel A Directory: /root/faceswap/data/trump\r\nModel B Directory: /root/faceswap/data/stalone\r\nTraining data directory: /root/faceswap/models\r\nLoading data, this may take a while...\r\nLoading Model from Model_Original plugin...\r\n/usr/local/lib/python3.6/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.\r\n from ._conv import register_converters as _register_converters\r\nUsing TensorFlow backend.\r\n/usr/lib/python3.6/importlib/_bootstrap.py:219: RuntimeWarning: compiletime version 3.5 of module 'tensorflow.python.framework.fast_tensor_util' does not match runtime version 3.6\r\n return f(*args, **kwds)\r\nFailed loading existing training data.\r\nUnable to open file (unable to open file: name = '/root/faceswap/models/encoder.h5', errno = 2, error message = 'No such file or directory', flags = 0, o_flags = 0)\r\nLoading Trainer from Model_Original plugin...\r\nStarting. 
Press \"Enter\" to stop training and save model\r\nException in thread Thread-2:\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/threading.py\", line 916, in _bootstrap_inner\r\n self.run()\r\n File \"/root/faceswap/lib/utils.py\", line 42, in run\r\n for item in self.generator:\r\n File \"/root/faceswap/lib/training_data.py\", line 43, in minibatch\r\n rtn = numpy.float32([read_image(data[j]) for j in range(i,i+size)])\r\n File \"/root/faceswap/lib/training_data.py\", line 43, in <listcomp>\r\n rtn = numpy.float32([read_image(data[j]) for j in range(i,i+size)])\r\nIndexError: list index out of range\r\n```\r\n\r\n## Expected behavior\r\nThere shouldn't be \"IndexError: list index out of range\"\r\n\r\n## Actual behavior\r\n\r\n*Describe, in some detail, what the program does instead. Be sure to include any error message or screenshots.*\r\n\r\n## Steps to reproduce\r\n\r\n## Other relevant information\r\nH/W: 4 cores, 16GB, Nvidial P100\r\nS/W: Ubuntu 16.04, NVIDIA binary driver - version 384.111\r\nCUDA 8.0\r\nCuDNN 6\r\nPython 3.6\r\nfaceswap commit: 0f8d9db826d7588f9feb151ab234f2aaf0d8ecf2\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c529bd4f1cb3a8abc53574b7211fc0b887107073', 'files': [{'path': 'lib/training_data.py', 'Loc': {\"(None, 'minibatch', 33)\": {'mod': [38]}}, 'status': 'modified'}, {'path': 'lib/cli/args_train.py', 'Loc': {\"('TrainArgs', 'get_argument_list', 35)\": {'mod': [140]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/cli/args_train.py", "lib/training_data.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "183aee37e93708c0ae73845face5b4469319ebd3", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1208", "iss_label": "", "title": "[Question] Which part of code to implement 'Configure Settings' GUI?", "body": "Which part of code to implement 'Configure Settings' GUI?\r\n\r\n![a](https://user-images.githubusercontent.com/32773605/152643917-b26f4b16-71e0-4f9a-8209-93206355f1b6.jpg)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '183aee37e93708c0ae73845face5b4469319ebd3', 'files': [{'path': 'lib/gui/popup_configure.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/gui/popup_configure.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "3ba44f75518e8010befab88042247e5147d0f212", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/15", "iss_label": "question\ndata", "title": "do i have to rename the given training data to src? ", "body": "if not, where to put the unzip data into directory. sorry for asking newby questions. \r\ni am using pycharm and docker. 
thanks\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3ba44f75518e8010befab88042247e5147d0f212', 'files': [{'path': 'convert_trump_cage.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["convert_trump_cage.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "68ef3b992674d87d0c73da9c29a4c5a0e735f04b", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/101", "iss_label": "", "title": "help me", "body": "virtualenv '/home/test/Desktop/faceswap-master'\r\npython3.5 '/home/test/Desktop/faceswap-master/faceswap.py' -h\r\n\r\ntest@ubuntu:~$ virtualenv '/home/test/Desktop/faceswap-master'\r\nNew python executable in /home/test/Desktop/faceswap-master/bin/python\r\nInstalling setuptools, pip, wheel...done.\r\ntest@ubuntu:~$ python3.5 '/home/test/Desktop/faceswap-master/faceswap.py' -h\r\nTraceback (most recent call last):\r\n File \"/home/test/Desktop/faceswap-master/faceswap.py\", line 8, in <module>\r\n from lib.utils import FullHelpArgumentParser\r\n File \"/home/test/Desktop/faceswap-master/lib/utils.py\", line 5, in <module>\r\n from scandir import scandir\r\nImportError: No module named 'scandir'\r\ntest@ubuntu:~$ \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '68ef3b992674d87d0c73da9c29a4c5a0e735f04b', 'files': [{'path': 'requirements-gpu.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements-gpu.txt"], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "629c02a61e1ad5f769f8f7388a091d5ce9aa8160", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1254", "iss_label": "", "title": "Can't Open GUI on Windows", "body": "**Describe the bug**\r\nWhenever I try to open the GUI of Faceswap, I get an error and it doesn't open. I am on Windows, and I have uninstalled and reinstalled multiple times, including redoing the conda environment. CLI functions work, but the main GUI does not open, either from the shortcut or a manual terminal run. I have also tried running with and without admin\r\n\r\n**To Reproduce**\r\nSteps to reproduce the behavior:\r\n1. Uninstall old Faceswap versions\r\n2. Install the latest windows version\r\n3. Run the Faceswap program in GUI mode\r\n4. See error\r\n\r\n**Expected behavior**\r\nI want the Faceswap GUI to open. 
It doesn't.\r\n\r\n**Screenshots**\r\n![image](https://user-images.githubusercontent.com/63259343/183342692-6f1c1bec-df9b-4f71-8a23-f2f77fccc008.png)\r\n![image](https://user-images.githubusercontent.com/63259343/183342835-998834e0-66d6-4751-84e9-a1c150e22063.png)\r\n\r\n\r\n**Desktop:**\r\n - OS: [Windows 11]\r\n - Python Version [3.9.12]\r\n - Conda Version [4.13.0]\r\n - Commit ID [6b2aac6]\r\n\r\n\r\n**Crash Report**\r\n[crash_report.2022.08.07.224753577271.log](https://github.com/deepfakes/faceswap/files/9278810/crash_report.2022.08.07.224753577271.log)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '629c02a61e1ad5f769f8f7388a091d5ce9aa8160', 'files': [{'path': 'requirements/_requirements_base.txt', 'Loc': {'(None, None, 15)': {'mod': [15]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements/_requirements_base.txt"], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9696b5606fd0963814fc0c3644565aa60face69d", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/462", "iss_label": "", "title": "Modify extractor to focus on mouth", "body": "I'd like to modify the extractor script to focus on the lower half of the face - specifically the mouth area. \r\n\r\nI'm experimenting with changing people's mouth movements, and I want to train a higher resolution \"mouth only\" network, so I can create new speech patterns that are re-composited onto the original footage. \r\n\r\nIs there a way to modify which facial landmarks the extractor looks at so it just takes the mouth?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9696b5606fd0963814fc0c3644565aa60face69d', 'files': [{'path': 'lib/aligner.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/aligner.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "9fb70f13552927bea1bf65fe35f4866f99171eaf", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/656", "iss_label": "", "title": "Not showing graph in gui", "body": "in log gui:\r\n`Exception in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/tkinter/__init__.py\", line 1705, in __call__\r\n return self.func(*args)\r\n File \"/home/telecast/Documents/faceswap/lib/gui/command.py\", line 461, in <lambda>\r\n command=lambda cmd=action: cmd(self.command))\r\n File \"/home/telecast/Documents/faceswap/lib/gui/utils.py\", line 550, in load\r\n self.add_to_recent(cfgfile.name, command)\r\n File \"/home/telecast/Documents/faceswap/lib/gui/utils.py\", line 596, in add_to_recent\r\n recent_files = self.serializer.unmarshal(inp.read().decode(\"utf-8\"))\r\n File \"/home/telecast/Documents/faceswap/lib/Serializer.py\", line 61, in unmarshal\r\n return json.loads(input_string)\r\n File \"/usr/lib/python3.6/json/__init__.py\", line 354, in loads\r\n return _default_decoder.decode(s)\r\n File \"/usr/lib/python3.6/json/decoder.py\", line 339, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"/usr/lib/python3.6/json/decoder.py\", line 357, in raw_decode\r\n 
raise JSONDecodeError(\"Expecting value\", s, err.value) from None\r\njson.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)\r\n`\r\n\r\n faceswap.log:\r\n`03/10/2019 00:02:36 MainProcess training_0 train training INFO Loading data, this may take a while...\r\n03/10/2019 00:02:36 MainProcess training_0 plugin_loader _import INFO Loading Model from Villain plugin...\r\n03/10/2019 00:02:40 MainProcess training_0 config load_config INFO Loading config: '/home/telecast/Documents/faceswap/config/train.ini'\r\n03/10/2019 00:02:40 MainProcess training_0 _base replace_config INFO Using configuration saved in state file\r\n03/10/2019 00:02:40 MainProcess training_0 deprecation new_func WARNING From /home/telecast/Documents/faceswap_env/lib/python3.6/site-packages/tensorflow/python/framework/op_def_library.py:263: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.\\nInstructions for updating:\\nColocations handled automatically by placer.\r\n03/10/2019 00:02:49 MainProcess training_0 _base load WARNING Failed loading existing training data. Generating new models\r\n03/10/2019 00:02:52 MainProcess training_0 plugin_loader _import INFO Loading Trainer from Original plugin...\r\n03/10/2019 00:02:54 MainProcess training_0 _base set_tensorboard INFO Enabled TensorBoard Logging\r\n03/10/2019 00:02:54 MainProcess training_0 deprecation new_func WARNING From /home/telecast/Documents/faceswap_env/lib/python3.6/site-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.\\nInstructions for updating:\\nUse tf.cast instead.\r\n03/10/2019 00:03:35 MainProcess training_0 _base save_models INFO saved models\r\n03/10/2019 00:04:29 MainProcess MainThread train end_thread INFO Exit requested! The trainer will complete its current cycle, save the models and quit (it can take up a couple of seconds depending on your training speed). 
If you want to kill it now, press Ctrl + c\r\n03/10/2019 00:04:31 MainProcess training_0 _base save_models INFO saved models`\r\n\r\n$ cat /etc/*release\r\nDISTRIB_ID=Ubuntu\r\nDISTRIB_RELEASE=18.10\r\n\r\n$ pip3 list\r\nPackage Version \r\n----------------------- --------\r\nabsl-py 0.7.0 \r\nastor 0.7.1 \r\nClick 7.0 \r\ncloudpickle 0.8.0 \r\ncmake 3.13.3 \r\ncycler 0.10.0 \r\ndask 1.1.3 \r\ndecorator 4.3.2 \r\ndlib 19.16.0 \r\nface-recognition 1.2.3 \r\nface-recognition-models 0.3.0 \r\nffmpy 0.2.2 \r\ngast 0.2.2 \r\ngrpcio 1.19.0 \r\nh5py 2.9.0 \r\nKeras 2.2.4 \r\nKeras-Applications 1.0.7 \r\nKeras-Preprocessing 1.0.9 \r\nkiwisolver 1.0.1 \r\nMarkdown 3.0.1 \r\nmatplotlib 2.2.2 \r\nmock 2.0.0 \r\nnetworkx 2.2 \r\nnumpy 1.15.4 \r\nnvidia-ml-py3 7.352.0 \r\nopencv-python 4.0.0.21\r\npathlib 1.0.1 \r\npbr 5.1.3 \r\nPillow 5.4.1 \r\npip 19.0.3 \r\nprotobuf 3.7.0 \r\npsutil 5.6.0 \r\npyparsing 2.3.1 \r\npython-dateutil 2.8.0 \r\npytz 2018.9 \r\nPyWavelets 1.0.2 \r\nPyYAML 3.13 \r\nscikit-image 0.14.2 \r\nscikit-learn 0.20.3 \r\nscipy 1.2.1 \r\nsetuptools 40.8.0 \r\nsix 1.12.0 \r\ntensorboard 1.13.1 \r\ntensorflow-estimator 1.13.0 \r\ntensorflow-gpu 1.13.1 \r\ntermcolor 1.1.0 \r\ntoolz 0.9.0 \r\ntqdm 4.31.1 \r\nWerkzeug 0.14.1 \r\nwheel 0.33.1\r\n\r\n$ nvcc --version\r\nnvcc: NVIDIA (R) Cuda compiler driver\r\nCopyright (c) 2005-2018 NVIDIA Corporation\r\nBuilt on Sat_Aug_25_21:08:01_CDT_2018\r\nCuda compilation tools, release 10.0, V10.0.130\r\n\r\n cudnn v7.4.1.5\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9fb70f13552927bea1bf65fe35f4866f99171eaf', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "e518206c8ef935ebc1b1ff64ae2901cc8ef05f94", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/57", "iss_label": "", "title": "Cannot install tensorflow-gpu requirement", "body": "\r\nTried installing the requirements-gpu.txt and get this error:\r\n\r\nCollecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) Cache entry deserialization failed, entry ignored Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: ) No matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n\r\nI went here to troubleshoot the issue: https://github.com/tensorflow/tensorflow/issues/8251\r\nInstalled Python 64bit. 
Opened new command prompt window and typed in: pip3 install --upgrade tensorflow-gpu\r\n\r\nSuccessfully uninstalled setuptools-28.8.0\r\nSuccessfully installed bleach-1.5.0 enum34-1.1.6 html5lib-0.9999999 markdown-2.6.11 numpy-1.13.3 protobuf-3.5.1 setuptools-38.4.0 six-1.11.0 tensorflow-gpu-1.4.0 tensorflow-tensorboard-0.4.0rc3 werkzeug-0.14.1 wheel-0.30.0\r\n\r\nWent back to my faceswap env to enter the requirements-gpu.txt and still get the same error:\r\n(faceswap) C:\\faceswap>pip install -r requirements-gpu.txt\r\nCollecting tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n Could not find a version that satisfies the requirement tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6)) (from versions: )\r\nNo matching distribution found for tensorflow-gpu==1.4.0 (from -r requirements-gpu.txt (line 6))\r\n\r\n## Other relevant information\r\n\r\n- **Operating system and version:** Windows 10\r\nPython 3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:54:40) [MSC v.1900 64 bit (AMD64)] on win32\r\n- **Faceswap version:** 1/5/2018\r\n- **Faceswap method:** CPU/GPU \"CPU method only works\"\r\n- ...\r\n\r\n ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e518206c8ef935ebc1b1ff64ae2901cc8ef05f94', 'files': [{'path': 'requirements-gpu.txt', 'Loc': {'(None, None, 6)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements-gpu.txt"], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "51f1993d93e0ffb581d44416f327f0cf731c34e8", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/209", "iss_label": "", "title": "doesn't work on 2GB GTX 960 even with LowMem model (what params could be reduced?)", "body": "LowMem is different from the common model with 2 lines:\r\nENCODER_DIM = 512 # instead of 1024\r\n#x = self.conv(1024)(x) - commented out.\r\n\r\nBut it's still not enough to run under Ubuntu 16.04, cuda8, 1.7Gb of free video RAM.\r\nIt fails with OOM on any batch size, even with bs=1 and bs=2.\r\n\r\nWhat about having some configurable params here? Like reducing filters numbers or ENCODER_DIM or smth else? \r\nAlso that would be great to have some doc which describes few main params and their influence on quality etc. For example fakeapp allows to select number of layers, nodes etc.\r\n\r\nP.S. 
I managed to run it with ENCODER_DIM = 64 and bs=16, but results are not so good (after 15 hours).\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '51f1993d93e0ffb581d44416f327f0cf731c34e8', 'files': [{'path': 'faceswap.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["faceswap.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "deepfakes", "repo_name": "faceswap", "base_commit": "a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd", "iss_html_url": "https://github.com/deepfakes/faceswap/issues/1361", "iss_label": "", "title": "Bounding boxes coordinates", "body": "It has been 2 weeks I have been working on it but cannot find the solution.\r\n\r\nI want the bounding boxes on the original image, of the result that is produced by the \"Extract\" process of faceswap code.\r\n\r\n\"Extract\" writes the faces extracted from the input image(s). I just want the coordinates from which this face is extracted (from original image).\r\n\r\nIf you could help me. I would be very grateful and would also help other people searching for the same problem.\r\nThank you.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a62a85c0215c1d791dd5ca705ba5a3fef08f0ffd', 'files': [{'path': 'lib/align/detected_face.py', 'Loc': {\"('DetectedFace', '__init__', 82)\": {'mod': [84, 85, 86, 87]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["lib/align/detected_face.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "iss_html_url": "https://github.com/3b1b/manim/issues/659", "iss_label": "", "title": "Problem with FadeOutAndShift", "body": "t3 text is not going through FadeOutAndShift.\r\nAlso tell me how I can FadeOutAndShift t1 and t3 together\r\n\r\n```# python -m manim try3.py test1 -pm\r\n\r\nfrom manimlib.imports import *\r\n\r\nclass test1(Scene):\r\n\tdef construct(self):\r\n\t\tt1=TextMobject(\"Hi!\")\r\n\t\tt2=TextMobject(\"My name is\")\r\n\t\tt3=TextMobject(\"Girish\")\r\n\r\n\t\tt1.set_color(RED)\r\n\t\tt3.set_color(BLUE)\r\n\r\n\t\tself.play(Write(t1), run_time=2)\r\n\t\tself.play(ApplyMethod(t1.shift, 1*UP))\r\n\t\tself.play(FadeIn(t2))\r\n\t\tself.play(Transform(t2, t3), run_time=2)\r\n\t\tself.wait(2)\r\n\t\tself.play(FadeOutAndShift(t1))\r\n self.play(FadeOutAndShift(t3))\r\n\t\t\r\n\r\n\t\t\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {\"('Scene', 'play', 455)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/scene/scene.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "ce06e58505dff26cccd497a9bd43969f74ae0da9", "iss_html_url": "https://github.com/3b1b/manim/issues/274", "iss_label": "", "title": "ImportError: No module named animation", "body": 
"I've installed manim on Win10. After run \"python extract_scene.py -s example_scenes.py\",\r\n\r\nthe next error is shown in the python interactive interpretor:\r\n\r\n> Traceback (most recent call last):\r\n File \"extract_scene.py\", line 15, in <module>\r\n from scene.scene import Scene\r\n File \"G:\\python\\manim\\scene\\scene.py\", line 16, in <module>\r\n from animation.transform import MoveToTarget\r\n File \"G:\\python\\manim\\animation\\transform.py\", line 8, in <module>\r\n from animation.animation import Animation\r\nImportError: No module named animation\r\n\r\nWhat I can do? I'm looking forward to get help to solve this problem. ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ce06e58505dff26cccd497a9bd43969f74ae0da9', 'files': [{'path': 'animation/transform.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["animation/transform.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "55ece141e898577ce44e71d718212a1ee816ed74", "iss_html_url": "https://github.com/3b1b/manim/issues/658", "iss_label": "", "title": "How to add sound to video?", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '55ece141e898577ce44e71d718212a1ee816ed74', 'files': [{'path': 'manimlib/scene/scene.py', 'Loc': {\"('Scene', 'add_sound', 543)\": {'mod': []}}, 'status': 'modified'}, {'path': 'old_projects/clacks/solution2/simple_scenes.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["old_projects/clacks/solution2/simple_scenes.py", "manimlib/scene/scene.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "97a0a707d759e0235450ea8c20f55a2529bd2973", "iss_html_url": "https://github.com/3b1b/manim/issues/878", "iss_label": "", "title": "Swedish characters not working", "body": "\r\n\r\nInclude at least:\r\n1. Steps to reproduce the issue (e.g. the command you ran)\r\n2. The unexpected behavior that occurred (e.g. error messages or screenshots)\r\n3. The environment (e.g. 
operating system and version of manim)\r\n\r\nI am new to manim and want to include swedish characters in a text, but it gives an error message when rendering.\r\nCode:\r\nclass Swe(Scene):\r\n\tdef construct(self):\r\n\t\ttext = TextMobject(r\"$\\\"o$\")\r\n\t\tself.add(text)\r\n\t\tself.wait()\r\n\r\nError message:\r\nTraceback (most recent call last):\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\extract_scene.py\", line 153, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\scene\\scene.py\", line 54, in __init__\r\n self.construct()\r\n File \"Geony.py\", line 115, in construct\r\n text = TextMobject(r\"$\\\"o$\")\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\mobject\\svg\\tex_mobject.py\", line 144, in __init__\r\n self, self.arg_separator.join(tex_strings), **kwargs\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\mobject\\svg\\tex_mobject.py\", line 45, in __init__\r\n self.template_tex_file_body\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\utils\\tex_file_writing.py\", line 19, in tex_to_svg_file\r\n dvi_file = tex_to_dvi(tex_file)\r\n File \"C:\\Manim\\manim\\manim2020\\manimlib\\utils\\tex_file_writing.py\", line 67, in tex_to_dvi\r\n \"See log output above or the log file: %s\" % log_file)\r\nException: Latex error converting to dvi. See log output above or the log file: C:\\Manim\\manim\\manim2020\\manimlib\\files\\Tex\\a26fbd67dc90adbc.log\r\n\r\nI am running python 3.7 (64 bit) and MikTex 2.9. All other features of manim are working fine.\r\nAny help would be much appreciated. Also, please keep in mind that I am new to manim and programing in general.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [12], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "6880ebcbc2525b2f3c0731439bef7ff981b4b5b4", "iss_html_url": "https://github.com/3b1b/manim/issues/924", "iss_label": "", "title": "Reconsidering TEX_USE_CTEX / using XeLaTeX", "body": "I worked on manim back in 2018. I added the function for using CTeX (XeLaTeX package for Chinese) and XeLaTeX instead of LaTeX using the flag `TEX_USE_CTEX` in constants.py (#315).\r\n\r\nI have stopped working on manim since 2019, but over the months there are apparently more and more people who want to use LaTeX rendering in non-English languages, and even on very old issues I still occasionally see people asking how to do that... Looking back at my change I really should have **decoupled using CTeX (TeX template) from XeLaTeX (rendering tool)**. This has caused a *lot* of confusions and made weird hacks/fixes necessary for only using XeLaTeX, especially for a language that is not Chinese or English, with the most recent #858 and #840. It really should have been a flag `TEX_USE_XELATEX` and another flag `TEMPLATE_TEX_NAME`, and the flag `TEX_USE_CTEX` is such that when it is `True`, `TEX_USE_XELATEX` is `True` and `TEMPLATE_TEX_NAME` is `\"ctex_template.tex\"`; otherwise `TEX_USE_XELATEX` is `False` and `TEMPLATE_TEX_NAME` is `\"tex_template.tex\"`. Then set `TEMPLATE_TEX_FILE` to `os.path.join(os.path.dirname(os.path.realpath(__file__)), TEMPLATE_TEX_NAME)`. 
Corresponding logic: constants.py lines 74\u201379.\r\n\r\nIt might be even better to set it dynamically using a function or as a parameter of `TexMobject()`, (see issues like #891). I looked at the source code and this is definitely possible. The options I can think of are\r\n1. Use the current `TEX_USE_CTEX`\r\n2. Add flags `TEX_USE_XELATEX` and `TEMPLATE_TEX_NAME`, and rework `TEX_USE_CTEX`\r\n3. Add parameters for `TexMobject()` like `use_xelatex=False` and `tex_template=\"tex_template.tex\"`\r\n4. Use the flags of 2. as a default, and make it possible to change the default using 3.\r\n\r\nNot really sure if this is the right place to raise this issue.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "ManimCommunity"}, {"pro": "manim", "path": ["manim/utils/tex_templates.py"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manim/utils/tex_templates.py"], "doc": [], "test": [], "config": [], "asset": ["ManimCommunity"]}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "49582c35919097585699598ad0ca49fe3f2117b5", "iss_html_url": "https://github.com/3b1b/manim/issues/660", "iss_label": "", "title": "ColorByCaracter help ", "body": "I want to color only theta of ```{ e }^{ i\\theta }```\r\n\r\nI was going through ColorByCaracter in 3_text_like_arrays.py . \r\nBut I fail to understand how you people separate the tex formula into arrays. I know about arrays but I can only copy the tex code from [Daum Equation Editor](http://s1.daumcdn.net/editor/fp/service_nc/pencil/Pencil_chromestore.html) and paste it. I don't know how to divide them into arrays.\r\n\r\nPlease help me.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '49582c35919097585699598ad0ca49fe3f2117b5', 'files': [{'path': 'manimlib/mobject/svg/tex_mobject.py', 'Loc': {\"('TexMobject', None, 132)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/svg/tex_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "32abbb9371308e8dff7410de387fe78e64b6fe7a", "iss_html_url": "https://github.com/3b1b/manim/issues/700", "iss_label": "", "title": "OSError: No file matching Suv.svg in image directory", "body": "I've tried putting the .SVG image into */media/designs/svg_images. 
But when I want to quote it in the .py file it still reports errors:\r\n\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/jason/Documents/manim/manimlib/extract_scene.py\", line 155, in main\r\n scene = SceneClass(**scene_kwargs)\r\n File \"/home/jason/Documents/manim/manimlib/scene/scene.py\", line 53, in __init__\r\n self.construct()\r\n File \"SVGTEST.py\", line 44, in construct\r\n height=height_size\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 45, in __init__\r\n self.ensure_valid_file()\r\n File \"/home/jason/Documents/manim/manimlib/mobject/svg/svg_mobject.py\", line 63, in ensure_valid_file\r\n self.file_name)\r\nOSError: No file matching MYSVG.svg in image directory\r\n\r\n```\r\n(Manjaro Linux, Texlive)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '32abbb9371308e8dff7410de387fe78e64b6fe7a', 'files': [{'path': 'manimlib/mobject/svg/svg_mobject.py', 'Loc': {\"('SVGMobject', 'ensure_valid_file', 49)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/svg/svg_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "b74e5ca254bccc1575b4c7b7de3c1cb2010aac75", "iss_html_url": "https://github.com/3b1b/manim/issues/694", "iss_label": "", "title": "can't graph trigonometric function of secx, cscx, cotx, tanx,...", "body": "source code:\r\n\r\nclass PlotFunctions(GraphScene):\r\n CONFIG = {\r\n \"x_min\" : -10,\r\n \"x_max\" : 10.3,\r\n \"y_min\" : -1.5,\r\n \"y_max\" : 1.5,\r\n \"graph_origin\" : ORIGIN ,\r\n \"function_color\" : RED ,\r\n \"axes_color\" : GREEN,\r\n \"x_labeled_nums\" :range(-10,12,2),\r\n\r\n }\r\n def construct(self):\r\n self.setup_axes(animate=True)\r\n func_graph=self.get_graph(self.func_to_graph,self.function_color)\r\n func_graph2=self.get_graph(self.func_to_graph2)\r\n vert_line = self.get_vertical_line_to_graph(TAU,func_graph,color=YELLOW)\r\n graph_lab = self.get_graph_label(func_graph, label = \"\\\\cos(x)\")\r\n graph_lab2=self.get_graph_label(func_graph2,label = \"\\\\sin(x)\", x_val=-10, direction=UP/2)\r\n two_pi = TexMobject(\"x = 2 \\\\pi\")\r\n label_coord = self.input_to_graph_point(TAU,func_graph)\r\n two_pi.next_to(label_coord,RIGHT+UP)\r\n\r\n\r\n\r\n self.play(ShowCreation(func_graph),ShowCreation(func_graph2))\r\n self.play(ShowCreation(vert_line), ShowCreation(graph_lab), ShowCreation(graph_lab2),ShowCreation(two_pi))\r\n\r\n\r\n def func_to_graph(self,x):\r\n #return np.cos(x)\r\n return np.tan(x)\r\n\r\n def func_to_graph2(self,x):\r\n return np.sin(x)\r\n\r\nI replaced \"return np.cos(x)\" to \"return np.tan(x)\"...i got this:\r\n![image](https://user-images.githubusercontent.com/36161299/63267544-e140a700-c2c4-11e9-9164-a14d37ee8673.png)\r\n\r\nand then I replaced \"return np.cos(x)\" to \"return np.sec(x)/cot(x)/csc(x)\"...i got this:\r\nAttributeError: module 'numpy' has no attribute 'sec'...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b74e5ca254bccc1575b4c7b7de3c1cb2010aac75', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {\"('VGroup', None, 868)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [17], 'path': None}]}", "own_code_loc": [{"Loc": [17], "path": null}], "ass_file_loc": [], 
"other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "loctype": {"code": [null, "manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "fc153bb49a529e8cbb02dd1514f06387cbf0ee6e", "iss_html_url": "https://github.com/3b1b/manim/issues/1206", "iss_label": "", "title": "Manim can't find my png file", "body": "I'm new to coding and am trying to learn manim, which I'm using on my macbook pro. I'm trying to create a scene where manim draws a png file I saved. I saved the png file as \"shirt.png\" in my manim folder. I then ran the following code:\r\n\r\n\r\n```\r\nfrom manimlib.imports import *\r\n\r\nclass OutFit(Scene):\r\n\tdef construct(self):\r\n\t\t\r\n\t\tshirt = ImageMobject(\"shirt\")\r\n\t\t\r\n\t\tself.play(Write(shirt))\r\n```\r\nI've looked up several ways of how to get manim to do images and some solutions, but since I'm pretty new at this I don't always understand the answers I've found from other people's issues or if it applies to mine. I keep getting this error response:\r\n\r\nraise IOError(\"File {} not Found\".format(file_name))\r\nOSError: File shirt not Found\r\n\r\nAny help is much appreciated. \r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fc153bb49a529e8cbb02dd1514f06387cbf0ee6e', 'files': [{'path': 'manimlib/animation/fading.py', 'Loc': {\"('FadeIn', None, 34)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/animation/fading.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "3b1b", "repo_name": "manim", "base_commit": "64c960041b5b9dcb0aac50019268a3bdf69d9563", "iss_html_url": "https://github.com/3b1b/manim/issues/608", "iss_label": "", "title": "What is VMobject exactly?", "body": "Can anyone explain what is the purpose of `VMobject` and how it differs from `Mobject`?\r\n\r\nI am trying to make some `old_projects` work. For example, I had to change `PMobject` to inherit from `VMobject` instead of `Mobject` in order to fix `NumberLineScene`. I do not know if it is correct thing to do or how will it affect the other scripts because I am unable to find the fundamental differences between the two objects. The wiki does not explain a lot, so please tell some detailed information.\r\n\r\nI dug commit histories and saw \r\n\r\n> \"Starting to vectorize all things\"\r\n\r\n kind of commit messages when the `VMobject` class is added to the engine. 
What does it mean \"Vectorize\" in this context?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '64c960041b5b9dcb0aac50019268a3bdf69d9563', 'files': [{'path': 'manimlib/mobject/types/vectorized_mobject.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["manimlib/mobject/types/vectorized_mobject.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "a2779fe2f6c9ab29508676f21242b1c6b88e2f67", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/5229", "iss_label": "documentation\nenhancement\nfix-me", "title": "[Documentation]: Micro-agents", "body": "**What problem or use case are you trying to solve?**\r\n\r\nCurrently in the `openhands/agenthub/codeact_agent` directory, we have an implementation of micro agents, but this is not documented.\r\n\r\nTo do so, we can:\r\n1. read the implementation of codeact agent\r\n2. read an example microagent in `openhands/agenthub/codeact_agent/micro/github.md`\r\n3. add documentation to `openhands/agenthub/codeact_agent/README.md`\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a2779fe2f6c9ab29508676f21242b1c6b88e2f67', 'files': [{'path': 'microagents/README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["microagents/README.md"], "test": [], "config": [], "asset": []}} +{"organization": "All-Hands-AI", "repo_name": "OpenHands", "base_commit": "08a2dfb01af1aec6743f5e4c23507d63980726c0", "iss_html_url": "https://github.com/All-Hands-AI/OpenHands/issues/635", "iss_label": "bug", "title": "Ollama support issue.", "body": "<!-- You MUST fill out this template. We will close issues that don't include enough information to reproduce -->\r\n#### Describe the bug\r\n\r\nWhen trying to configure OpenDevin to run with Ollama there are requests that are being sent to the ollama server like this:\r\n\r\n\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/1931e068-0341-429b-8c4e-0dd2da36f54c)\r\n\r\n\r\nThe post request should look like this:\r\n`\"POST /chat/completions HTTP/1.1\"`\r\n\r\n<!-- a short description of the problem -->\r\n\r\n#### Setup and configuration\r\n**Current version**:\r\n<!-- run `git log -n 1` to see this -->\r\n```bash\r\ncommit 5c640c99cafb3c718dad60f377f3a725a8bab1de (HEAD -> local-llm-flag, origin/main, origin/HEAD, main)\r\n```\r\n\r\n<!-- tell us everything about your environment -->\r\n**My config.toml and environment vars** (be sure to redact API keys):\r\n```toml\r\nWORKSPACE_DIR=\"./workspace\"\r\nLLM_BASE_URL=\"http://localhost:8000\"\r\nLLM_MODEL=\"ollama/starcoder2:15b\"\r\nLLM_EMBEDDING_MODEL=\"ollama/starcoder2:15b\"\r\n```\r\n\r\n**My model and agent** (you can see these settings in the UI):\r\n* Model: ollama/starcoder2\r\n* Agent: MonologueAgent\r\n\r\n**Commands I ran to install and run OpenDevin**:\r\n```\r\ngit clone ...\r\nmake build\r\nmake start-backend\r\nmake start-frontend\r\n```\r\n\r\n**Steps to Reproduce**:\r\n1. In `opendevin/llm/llm.py` in `__init__` replace `self.model = model if model else DEFAULT_MODEL_NAME` with `self.model_name = DEFAULT_MODEL_NAME`\r\n2. 
Run your local model on litellm `litellm --model ollama/starcoder2:15b --port 8000`\r\n3. Run `make build` then `make start-backend` and `make start-frontend`\r\n4. Ask devin to do anything ex 'make a hello world script in python'\r\n5. Observe 404 errors spammed in litellm server log\r\n\r\n**Logs, error messages, and screenshots**:\r\nThis is a log from the backend server running from `make start-backend` steps 0-99 all look the same.\r\n```\r\n==============\r\nSTEP 99\r\n\r\nPLAN:\r\nplease make a simple flask app that says hello world.\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1436, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 31, in condense\r\n resp = llm.completion(messages=messages)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 328, in completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 325, in completion\r\n response = self.function_with_fallbacks(**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1419, in function_with_fallbacks\r\n raise original_exception\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1344, in function_with_fallbacks\r\n response = self.function_with_retries(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1496, in function_with_retries\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1462, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File 
\"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nERROR:\r\nError condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1436, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 31, in condense\r\n resp = llm.completion(messages=messages)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 328, in completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 325, in completion\r\n response = self.function_with_fallbacks(**kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1419, in function_with_fallbacks\r\n raise original_exception\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1344, in function_with_fallbacks\r\n response = self.function_with_retries(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1496, in function_with_retries\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 1462, in function_with_retries\r\n response = original_function(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 386, in _completion\r\n raise e\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 334, in _completion\r\n deployment = self.get_available_deployment(\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/.local/share/virtualenvs/OpenDevin-thTG-Evv/lib/python3.11/site-packages/litellm/router.py\", line 2313, in get_available_deployment\r\n raise ValueError(f\"No healthy deployment available, 
passed model={model}\")\r\nValueError: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/home/quimbo/OpenDevin/opendevin/controller/agent_controller.py\", line 112, in step\r\n action = self.agent.step(self.state)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py\", line 153, in step\r\n self._add_event(prev_action.to_dict())\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/agent.py\", line 96, in _add_event\r\n self.monologue.condense(self.llm)\r\n File \"/home/quimbo/OpenDevin/agenthub/monologue_agent/utils/monologue.py\", line 36, in condense\r\n raise RuntimeError(f\"Error condensing thoughts: {e}\")\r\nRuntimeError: Error condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\n\r\nOBSERVATION:\r\nError condensing thoughts: No healthy deployment available, passed model=ollama/starcoder2:15b\r\nExited before finishing\r\n```\r\n\r\n#### Additional Context\r\n\r\nLitellm for local models is expecting api calls in the following format:\r\n\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/67b10c26-a9e6-44a1-a79e-908fc7d3749f)\r\n\r\nFrom: `http://localhost:8000/#/`\r\n\r\nI know that the problem is whatever is managing the api calls is set to call `/api/generate/` because this is the convention, but for local server that is not supported. I do not know where to look to fix this, any ideas?\r\n\r\nThe server responds when I test it like this:\r\n```\r\ndef query_local_llm(prompt, limit=TOKEN_LIMIT):\r\n # Replace with your actual server address and port\r\n url = \"http://0.0.0.0:8000/chat/completions\"\r\n payload = {\r\n \"model\": \"ollama/mistral\",\r\n \"messages\" : [{\"content\": prompt, \"role\": \"user\"}],\r\n \"max_tokens\": limit\r\n }\r\n response = requests.post(url, json=payload)\r\n```\r\n![image](https://github.com/OpenDevin/OpenDevin/assets/76570167/b9bae877-5bd4-4864-b672-9678bb9a294e)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '08a2dfb01af1aec6743f5e4c23507d63980726c0', 'files': [{'path': 'opendevin/llm/LOCAL_LLM_GUIDE.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["opendevin/llm/LOCAL_LLM_GUIDE.md"], "test": [], "config": [], "asset": []}} +{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "d636e5baa8a077e2869bfe3b76525efec42392ec", "iss_html_url": "https://github.com/scrapy/scrapy/issues/2276", "iss_label": "", "title": "can LinkExtractor extract scrapy.link with node info", "body": "the html is like below, i want to extract the link `/example/category/pg{page}/`, but the `scrapy.link` does not contains the node info(`currentPage` and `totalPage`), how can i extract the link with the node info \n\n``` html\n<div class=\"page-box\">\n <div page-url=\"/example/category/pg{page}/\"\n totalPage=\"35\"\n currentPage=\"1\" \n </div>\n</div>\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd636e5baa8a077e2869bfe3b76525efec42392ec', 'files': [{'path': 'scrapy/http/response/text.py', 'Loc': {\"('TextResponse', 'css', 117)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], 
"analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/http/response/text.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "scrapy", "repo_name": "scrapy", "base_commit": "892467cb8a40c54840284a08d0f98ab1b3af7bc4", "iss_html_url": "https://github.com/scrapy/scrapy/issues/4565", "iss_label": "", "title": "AttributeError: module 'resource' has no attribute 'getrusage'", "body": "version : Scrapy 2.1.0\r\n\r\n```\r\n2020-05-11 20:05:28 [scrapy.core.engine] INFO: Spider opened\r\n2020-05-11 20:05:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)\r\n2020-05-11 20:05:28 [dy] INFO: Spider opened: dy\r\n2020-05-11 20:05:28 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method MemoryUsage.engine_started of <scrapy.extensions.memusage.MemoryUsage object at 0x0000000004D3A358>>\r\nTraceback (most recent call last):\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 161, in maybeDeferred_coro\r\n result = f(*args, **kw)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\pydispatch\\robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 55, in engine_started\r\n self.crawler.stats.set_value('memusage/startup', self.get_virtual_size())\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 48, in get_virtual_size\r\n size = self.resource.getrusage(self.resource.RUSAGE_SELF).ru_maxrss\r\nAttributeError: module 'resource' has no attribute 'getrusage'\r\n```\r\n\r\n```\r\n2020-05-11 20:05:43 [scrapy.core.engine] INFO: Closing spider (finished)\r\n2020-05-11 20:05:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:\r\n{'downloader/request_bytes': 6751,\r\n 'downloader/request_count': 14,\r\n 'downloader/request_method_count/GET': 14,\r\n 'downloader/response_bytes': 12380415,\r\n 'downloader/response_count': 14,\r\n 'downloader/response_status_count/200': 10,\r\n 'downloader/response_status_count/302': 4,\r\n 'elapsed_time_seconds': 14.631021,\r\n 'finish_reason': 'finished',\r\n 'finish_time': datetime.datetime(2020, 5, 11, 12, 5, 43, 378200),\r\n 'item_scraped_count': 65,\r\n 'log_count/DEBUG': 85,\r\n 'log_count/ERROR': 1,\r\n 'log_count/INFO': 9,\r\n 'request_depth_max': 1,\r\n 'response_received_count': 10,\r\n 'scheduler/dequeued': 6,\r\n 'scheduler/dequeued/memory': 6,\r\n 'scheduler/enqueued': 6,\r\n 'scheduler/enqueued/memory': 6,\r\n 'start_time': datetime.datetime(2020, 5, 11, 12, 5, 28, 747179)}\r\n2020-05-11 20:05:43 [scrapy.core.engine] INFO: Spider closed (finished)\r\n2020-05-11 20:05:43 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method MemoryUsage.engine_stopped of <scrapy.extensions.memusage.MemoryUsage object at 0x0000000004D3A358>>\r\nTraceback (most recent call last):\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\utils\\defer.py\", line 161, in maybeDeferred_coro\r\n result = f(*args, **kw)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\pydispatch\\robustapply.py\", line 55, in robustApply\r\n return receiver(*arguments, **named)\r\n File \"D:\\microsoft\\python37\\lib\\site-packages\\scrapy\\extensions\\memusage.py\", line 70, in engine_stopped\r\n for tsk in self.tasks:\r\nAttributeError: 'MemoryUsage' object has no attribute 
'tasks'\r\n```\r\n\r\n(edited for text formatting)", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '892467cb8a40c54840284a08d0f98ab1b3af7bc4', 'files': [{'path': 'scrapy/commands/settings.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["scrapy/commands/settings.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "commaai", "repo_name": "openpilot", "base_commit": "ce9559cc54433244cb01d4781302eb072a3fd519", "iss_html_url": "https://github.com/commaai/openpilot/issues/30078", "iss_label": "bug\nfingerprint\ncar\nford", "title": "2023 Ford Maverick Not Recognized", "body": "### Describe the bug\n\nCar Not Recognized\r\n\r\nLooks like all the values for firmware are the same as what is already in values.py\n\n### Which car does this affect?\n\nFord Maverick 2023\n\n### Provide a route where the issue occurs\n\n66833387c2bbbca0|2023-09-27--21-13-05\n\n### openpilot version\n\nmaster-ci\n\n### Additional info\n\n`{'carParams': {'alternativeExperience': 1,\r\n 'autoResumeSng': True,\r\n 'carFingerprint': 'mock',\r\n 'carFw': [{'address': 2016,\r\n 'brand': 'ford',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1842,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'shiftByWire',\r\n 'fwVersion': b'NZ6P-14G395-AD\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1850,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0},\r\n {'address': 2016,\r\n 'brand': 'mazda',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': 
b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0}],\r\n 'carName': 'mock',\r\n 'carVin': '3FTTW8E31PRA79783',\r\n 'centerToFront': 1.350000023841858,\r\n 'communityFeatureDEPRECATED': False,\r\n 'dashcamOnly': False,\r\n 'directAccelControlDEPRECATED': False,\r\n 'enableApgsDEPRECATED': False,\r\n 'enableBsm': False,\r\n 'enableCameraDEPRECATED': False,\r\n 'enableDsu': False,\r\n 'enableGasInterceptor': False,\r\n 'experimentalLongitudinalAvailable': False,\r\n 'fingerprintSource': 'can',\r\n 'flags': 0,\r\n 'fuzzyFingerprint': False,\r\n 'hasStockCameraDEPRECATED': False,\r\n 'isPandaBlackDEPRECATED': False,\r\n 'lateralTuning': {'pid': {'kf': 0.0}},\r\n 'longitudinalActuatorDelayLowerBound': 0.15000000596046448,\r\n 'longitudinalActuatorDelayUpperBound': 0.15000000596046448,\r\n 'longitudinalTuning': {'deadzoneBP': [0.0], 'deadzoneV': [0.0], 'kf': 1.0, 'kiBP': [0.0], 'kiV': [1.0], 'kpBP': [0.0], 'kpV': [1.0]},\r\n 'mass': 1836.0,\r\n 'maxLateralAccel': 10.0,\r\n 'maxSteeringAngleDegDEPRECATED': 0.0,\r\n 'minEnableSpeed': -1.0,\r\n 'minSpeedCanDEPRECATED': 0.0,\r\n 'minSteerSpeed': 0.0,\r\n 'networkLocation': 'fwdCamera',\r\n 'notCar': False,\r\n 'openpilotLongitudinalControl': False,\r\n 'pcmCruise': True,\r\n 'radarTimeStep': 0.05000000074505806,\r\n 'radarUnavailable': False,\r\n 'rotationalInertia': 3139.534912109375,\r\n 'safetyConfigs': [{'safetyModel': 'noOutput', 'safetyParam': 0, 'safetyParam2DEPRECATED': 0, 'safetyParamDEPRECATED': 0}],\r\n 'safetyModelDEPRECATED': 'silent',\r\n 'safetyModelPassiveDEPRECATED': 'silent',\r\n 'safetyParamDEPRECATED': 0,\r\n 'startAccel': 0.0,\r\n 'startingAccelRateDEPRECATED': 0.0,\r\n 'startingState': False,\r\n 'steerActuatorDelay': 0.0,\r\n 'steerControlType': 'torque',\r\n 'steerLimitAlert': False,\r\n 'steerLimitTimer': 1.0,\r\n 'steerRateCostDEPRECATED': 0.0,\r\n 'steerRatio': 13.0,\r\n 'steerRatioRear': 0.0,\r\n 'stopAccel': -2.0,\r\n 'stoppingControl': True,\r\n 'stoppingDecelRate': 0.800000011920929,\r\n 'tireStiffnessFactor': 1.0,\r\n 'tireStiffnessFront': 201087.203125,\r\n 'tireStiffnessRear': 317877.90625,\r\n 'transmissionType': 'unknown',\r\n 'vEgoStarting': 0.5,\r\n 'vEgoStopping': 0.5,\r\n 'wheelSpeedFactor': 1.0,\r\n 'wheelbase': 2.700000047683716},\r\n 'logMonoTime': 971923210573,\r\n 'valid': True}\r\n{'carParams': {'alternativeExperience': 1,\r\n 'autoResumeSng': True,\r\n 'carFingerprint': 'mock',\r\n 'carFw': [{'address': 2016,\r\n 'brand': 'ford',\r\n 'bus': 1,\r\n 'ecu': 
'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1842,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'shiftByWire',\r\n 'fwVersion': b'NZ6P-14G395-AD\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1850,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'ford',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'>\\x00', b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0},\r\n {'address': 2016,\r\n 'brand': 'mazda',\r\n 'bus': 1,\r\n 'ecu': 'engine',\r\n 'fwVersion': b'PZ6A-14C204-JE\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': False,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 2024,\r\n 'subAddress': 0},\r\n {'address': 1840,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'eps',\r\n 'fwVersion': b'NZ6C-14D003-AL\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1848,\r\n 'subAddress': 0},\r\n {'address': 1888,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'abs',\r\n 'fwVersion': b'PZ6C-2D053-ED\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1896,\r\n 'subAddress': 0},\r\n {'address': 1798,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdCamera',\r\n 'fwVersion': b'NZ6T-14F397-AC\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1806,\r\n 'subAddress': 0},\r\n {'address': 1892,\r\n 'brand': 'mazda',\r\n 'bus': 0,\r\n 'ecu': 'fwdRadar',\r\n 'fwVersion': b'NZ6T-14D049-AA\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00',\r\n 'logging': True,\r\n 'obdMultiplexing': True,\r\n 'request': [b'\"\\xf1\\x88'],\r\n 'responseAddress': 1900,\r\n 'subAddress': 0}],\r\n 'carName': 'mock',\r\n 'carVin': '3FTTW8E31PRA79783',\r\n 'centerToFront': 1.350000023841858,\r\n 'communityFeatureDEPRECATED': False,\r\n 'dashcamOnly': False,\r\n 'directAccelControlDEPRECATED': False,\r\n 'enableApgsDEPRECATED': 
False,\r\n 'enableBsm': False,\r\n 'enableCameraDEPRECATED': False,\r\n 'enableDsu': False,\r\n 'enableGasInterceptor': False,\r\n 'experimentalLongitudinalAvailable': False,\r\n 'fingerprintSource': 'can',\r\n 'flags': 0,\r\n 'fuzzyFingerprint': False,\r\n 'hasStockCameraDEPRECATED': False,\r\n 'isPandaBlackDEPRECATED': False,\r\n 'lateralTuning': {'pid': {'kf': 0.0}},\r\n 'longitudinalActuatorDelayLowerBound': 0.15000000596046448,\r\n 'longitudinalActuatorDelayUpperBound': 0.15000000596046448,\r\n 'longitudinalTuning': {'deadzoneBP': [0.0], 'deadzoneV': [0.0], 'kf': 1.0, 'kiBP': [0.0], 'kiV': [1.0], 'kpBP': [0.0], 'kpV': [1.0]},\r\n 'mass': 1836.0,\r\n 'maxLateralAccel': 10.0,\r\n 'maxSteeringAngleDegDEPRECATED': 0.0,\r\n 'minEnableSpeed': -1.0,\r\n 'minSpeedCanDEPRECATED': 0.0,\r\n 'minSteerSpeed': 0.0,\r\n 'networkLocation': 'fwdCamera',\r\n 'notCar': False,\r\n 'openpilotLongitudinalControl': False,\r\n 'pcmCruise': True,\r\n 'radarTimeStep': 0.05000000074505806,\r\n 'radarUnavailable': False,\r\n 'rotationalInertia': 3139.534912109375,\r\n 'safetyConfigs': [{'safetyModel': 'noOutput', 'safetyParam': 0, 'safetyParam2DEPRECATED': 0, 'safetyParamDEPRECATED': 0}],\r\n 'safetyModelDEPRECATED': 'silent',\r\n 'safetyModelPassiveDEPRECATED': 'silent',\r\n 'safetyParamDEPRECATED': 0,\r\n 'startAccel': 0.0,\r\n 'startingAccelRateDEPRECATED': 0.0,\r\n 'startingState': False,\r\n 'steerActuatorDelay': 0.0,\r\n 'steerControlType': 'torque',\r\n 'steerLimitAlert': False,\r\n 'steerLimitTimer': 1.0,\r\n 'steerRateCostDEPRECATED': 0.0,\r\n 'steerRatio': 13.0,\r\n 'steerRatioRear': 0.0,\r\n 'stopAccel': -2.0,\r\n 'stoppingControl': True,\r\n 'stoppingDecelRate': 0.800000011920929,\r\n 'tireStiffnessFactor': 1.0,\r\n 'tireStiffnessFront': 201087.203125,\r\n 'tireStiffnessRear': 317877.90625,\r\n 'transmissionType': 'unknown',\r\n 'vEgoStarting': 0.5,\r\n 'vEgoStopping': 0.5,\r\n 'wheelSpeedFactor': 1.0,\r\n 'wheelbase': 2.700000047683716},\r\n 'logMonoTime': 1021914306894,\r\n 'valid': True}`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ce9559cc54433244cb01d4781302eb072a3fd519', 'files': []}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66", "iss_html_url": "https://github.com/psf/requests/issues/775", "iss_label": "", "title": "Content marked as consumed in 0.13.6", "body": "Content is immediately marked as consumed in 0.13.6, causing calls to e.g. 
response.iter_content() to throw an error.\n\nTest code (tested with python 2.6):\n\n```\nimport requests\n\nr = requests.get('http://docs.python-requests.org/')\nif r._content_consumed:\n print 'consumed'\nelse:\n print 'not consumed'\n```\n\nIn 0.13.5 this prints:\nnot consumed\n\nIn 0.13.6 this prints:\nconsumed\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '27b55a74d7b9bd2f8c60fd0ee342bcbbf40e0a66', 'files': [{'path': 'requests/models.py', 'Loc': {\"('Request', '__init__', 47)\": {'mod': [62]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["requests/models.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "2de907ad778de270911acaffe93883f0e2729a4a", "iss_html_url": "https://github.com/psf/requests/issues/4602", "iss_label": "", "title": "Chunk-encoded request doesn't recognize iter_content generator", "body": "Passing a generator created by iter_content() as request data raises \"TypeError: sendall() argument 1 must be string or buffer, not generator\".\r\n\r\n## Expected Result\r\n\r\nThe POST request successfully delives the content from the GET request.\r\n\r\n## Actual Result\r\n\r\nA TypeError is raised:\r\n```\r\nTraceback (most recent call last):\r\n File \"..\\test.py\", line 7, in <module>\r\n PostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n File \"..\\test.py\", line 6, in PostForward\r\n return requests.post(url=dst, data=data, headers={'Content-Length': length})\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 112, in post\r\n return request('post', url, data=data, json=json, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\api.py\", line 58, in request\r\n return session.request(method=method, url=url, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 508, in request\r\n resp = self.send(prep, **send_kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\sessions.py\", line 618, in send\r\n r = adapter.send(request, **kwargs)\r\n File \"C:\\Python27\\lib\\site-packages\\requests\\adapters.py\", line 440, in send\r\n timeout=timeout\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 601, in urlopen\r\n chunked=chunked)\r\n File \"C:\\Python27\\lib\\site-packages\\urllib3\\connectionpool.py\", line 357, in _make_request\r\n conn.request(method, url, **httplib_request_kw)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1042, in request\r\n self._send_request(method, url, body, headers)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1082, in _send_request\r\n self.endheaders(body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 1038, in endheaders\r\n self._send_output(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 886, in _send_output\r\n self.send(message_body)\r\n File \"C:\\Python27\\lib\\httplib.py\", line 858, in send\r\n self.sock.sendall(data)\r\n File \"C:\\Python27\\lib\\socket.py\", line 228, in meth\r\n return getattr(self._sock,name)(*args)\r\nTypeError: sendall() argument 1 must be string or buffer, not generator\r\n```\r\n\r\n## Reproduction Steps\r\n\r\n```python\r\nimport requests\r\ndef PostForward(src, dst):\r\n\twith requests.get(url=src, stream=True) as srcResponse:\r\n\t\tlength = 
srcResponse.headers['Content-Length']\r\n\t\tdata = srcResponse.iter_content(1024)\r\n\t\treturn requests.post(url=dst, data=data, headers={'Content-Length': length})\r\nPostForward(\"http://myhost/img/foo.png\", \"http://myotherhost/convert\")\r\n```\r\n\r\n## System Information\r\n\r\n $ python -m requests.help\r\n\r\n```\r\n{\r\n \"chardet\": {\r\n \"version\": \"3.0.4\"\r\n },\r\n \"cryptography\": {\r\n \"version\": \"\"\r\n },\r\n \"idna\": {\r\n \"version\": \"2.6\"\r\n },\r\n \"implementation\": {\r\n \"name\": \"CPython\",\r\n \"version\": \"2.7.14\"\r\n },\r\n \"platform\": {\r\n \"release\": \"10\",\r\n \"system\": \"Windows\"\r\n },\r\n \"pyOpenSSL\": {\r\n \"openssl_version\": \"\",\r\n \"version\": null\r\n },\r\n \"requests\": {\r\n \"version\": \"2.18.4\"\r\n },\r\n \"system_ssl\": {\r\n \"version\": \"100020bf\"\r\n },\r\n \"urllib3\": {\r\n \"version\": \"1.22\"\r\n },\r\n \"using_pyopenssl\": false\r\n}\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "requests"}, {"pro": "toolbelt", "path": ["requests_toolbelt/streaming_iterator.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["requests_toolbelt/streaming_iterator.py"], "doc": [], "test": [], "config": [], "asset": ["requests"]}} +{"organization": "psf", "repo_name": "requests", "base_commit": "f17ef753d2c1f4db0d7f5aec51261da1db20d611", "iss_html_url": "https://github.com/psf/requests/issues/3031", "iss_label": "Needs Info\nQuestion/Not a bug", "title": "[WinError 10048] Only one usage of each socket address ...", "body": "I notice that despite using requests.Session() - I still seem to be creating new connections/sockets which eventually exhaust (TIME_WAIT) and I get the following error:\n\n> [WinError 10048] Only one usage of each socket address (protocol/network address/port) is normally permitted',))\n\n```\ns = requests.Session()\ndata = zip(url_routes, cycle(s))\ncalc_routes = pool.map(processRequest, data)\n\n```\n\nI posted a bit more [here](http://stackoverflow.com/questions/35793908/python-multiprocessing-associate-a-process-with-a-session), however not sure how to address this\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [8], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "6f659a41794045292b836859f1281d33eeed8260", "iss_html_url": "https://github.com/psf/requests/issues/3740", "iss_label": "", "title": "File download weirdness", "body": "I noticed this issue building conda recipes. 
Conda uses requests to download files from the internet.\r\n\r\nThe file that is being fetched is: https://dakota.sandia.gov/sites/default/files/distributions/public/dakota-6.5-public.src.tar.gz\r\n(link found here: https://dakota.sandia.gov/download.html)\r\n\r\nDownloading with curl -O\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with urllib2 (from the standard library):\r\nfilesize: 78MB\r\nmd5: 02c46e904d40bba6b308065db34c1ad7\r\n\r\nDownloading with requests-2.12.1 (supplied with conda)\r\nfilesize: 248MB\r\nmd5: 41e4268140d850756812510512d8eee8\r\ntar -tf doesn't indicate any corruption.\r\n\r\nI'm not sure what is different with this particular URL, but the other files I tried with requests worked. I don't know where the extra 170MB is coming from?\r\n\r\ncode used to download files:\r\n```python\r\ndef download_file(url, fn):\r\n r = requests.get(url, stream=True)\r\n with open(fn, 'wb') as f:\r\n for chunk in r.iter_content(chunk_size=1024): \r\n if chunk:\r\n f.write(chunk)\r\n\r\ndef download_urllib2(url, fn):\r\n f = urllib2.urlopen(url)\r\n with open(fn, 'wb') as fh:\r\n for x in iter(lambda: f.read(1024), b''):\r\n fh.write(x)\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6f659a41794045292b836859f1281d33eeed8260', 'files': [{'path': 'docs/user/quickstart.rst', 'Loc': {'(None, None, 166)': {'mod': [166]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/user/quickstart.rst"], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "62176a1ca7207db37273365b4691ed599203b828", "iss_html_url": "https://github.com/psf/requests/issues/3849", "iss_label": "", "title": "Received response with content-encoding: gzip, but failed to decode it", "body": "```python\r\nimport requests\r\n\r\nrequests.get('http://gett.bike/')\r\n```\r\nThis code raises the following exception:\r\n```python\r\nContentDecodingError: ('Received response with content-encoding: gzip, but failed to decode it.',\r\nerror('Error -3 while decompressing data: incorrect data check',))\r\n```\r\nArch linux x64\r\nrequests==2.13.0\r\npython=3.6.0", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '62176a1ca7207db37273365b4691ed599203b828', 'files': [{'path': 'src/requests/api.py', 'Loc': {\"(None, 'request', 14)\": {'mod': [24]}}, 'status': 'modified'}, {'Loc': [4], 'path': None}]}", "own_code_loc": [{"Loc": [4], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "src/requests/api.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "057722af23edf3f69bf7bdfed7c6c32cbe1ce2e7", "iss_html_url": "https://github.com/psf/requests/issues/3015", "iss_label": "", "title": "Ability to set timeout after response", "body": "For devs who use this great library, it would be very beneficial to be able to set the timeout AFTER initial connection. 
There are a few scenarios where this is useful but one of the main patterns/use cases is this:\n\n```\n\nimport requests\nimport socket\n\n# May or may not subclass threading.Thread\nclass Getter(object):\n def __init__(self):\n self.request = requests.get(url, stream=True)\n\n def run(self):\n with open(path, 'r+b') as file:\n\n bytes_consumed = 0\n while True:\n try:\n\n chunk = self.request.raw.read(size)\n if not chunk:\n break\n chunk_length = len(chunk)\n\n file.write(chunk)\n bytes_consumed += chunk_length\n\n except socket.timeout:\n # handle incomplete download by using range header next time, etc.\n```\n\nHandling incomplete downloads due to connection loss is common and especially important when downloading large or many files (or both). As you can see, this can be achieved in a fairly straightforward way. The issue is there is really no good way to write tests for this. Each method would involve OS specific code which would also be a no-go for CI services.\n\nWhat would be an option is the ability to set the timeout after establishing a connection. This way in a test you could do \"r.timeout = (None, 0.00001)\" and during reading it would simulate a timeout.\n\nTo my knowledge this is no way currently to inject a new Timeout class retroactively. Is this correct?\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "psf", "repo_name": "requests", "base_commit": "1285f576ae0a848de27af10d917c19b60940d1fa", "iss_html_url": "https://github.com/psf/requests/issues/3774", "iss_label": "", "title": "bad handshake error with ssl3", "body": "I have an inhouse IIS server with ssl3 but an expired certificate, so I used requests without certificate verification and it was working fine with requests 2.11.1. But after I upgrade requests to 2.12.0, there was an error occured. 
\r\n\r\nthe code is:\r\n...\r\nrequests.get('https://10.192.8.89:8080/yps_report', verify=False)\r\n...\r\n\r\nerror message:\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 417, in wrap_socket\r\n cnx.do_handshake()\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1426, in do_handshake\r\n self._raise_ssl_error(self._ssl, result)\r\n File \"c:\\python35\\lib\\site-packages\\OpenSSL\\SSL.py\", line 1167, in _raise_ssl_error\r\n raise SysCallError(-1, \"Unexpected EOF\")\r\nOpenSSL.SSL.SysCallError: (-1, 'Unexpected EOF')\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 594, in urlopen\r\n chunked=chunked)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 350, in _make_request\r\n self._validate_conn(conn)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connectionpool.py\", line 835, in _validate_conn\r\n conn.connect()\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\connection.py\", line 323, in connect\r\n ssl_context=context)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\util\\ssl_.py\", line 324, in ssl_wrap_socket\r\n return context.wrap_socket(sock, server_hostname=server_hostname)\r\n File \"c:\\python35\\lib\\site-packages\\requests\\packages\\urllib3\\contrib\\pyopenssl.py\", line 424, in wrap_socket\r\n raise ssl.SSLError('bad handshake: %r' % e)\r\nssl.SSLError: (\"bad handshake: SysCallError(-1, 'Unexpected EOF')\",)\r\n...\r\n\r\nI tried to downgrade requests to 2.11.1 and the error was gone. I have no idea how to fix this.\r\nfrom requests.adapters import HTTPAdapter\nfrom requests.packages.urllib3.util.ssl_ import create_urllib3_context\n\n# This is the 2.11 Requests cipher string.\nCIPHERS = (\n 'ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:ECDH+HIGH:'\n 'DH+HIGH:ECDH+3DES:DH+3DES:RSA+AESGCM:RSA+AES:RSA+HIGH:RSA+3DES:!aNULL:'\n '!eNULL:!MD5'\n)\n\nclass DESAdapter(HTTPAdapter):\n def init_poolmanager(self, *args, **kwargs):\n context = create_urllib3_context(ciphers=CIPHERS)\n kwargs['ssl_context'] = context\n return super(HTTPAdapter, self).init_poolmanager(*args, **kwargs)\n\ns = requests.Session()\ns.mount('https://10.192.8.89', DESAdapter())", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [41], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nThe user's code from one of the user's comments below needs to be placed into it", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "a6d4c3ff7cf43c24be6622102cee834fc5096496", "iss_html_url": "https://github.com/ansible/ansible/issues/78759", "iss_label": "module\nsupport:core\nbug\naffects_2.9", "title": "\"Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>.", "body": "### Summary\r\n\r\nWhen trying to pass a variable called i.e. 
sysctl.values to loop, I will get the above error.\r\n\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ndebug (only used for debugging)\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.9.27\r\n config file = /home/rf/.ansible.cfg\r\n configured module search path = ['/home/rf/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/lib/python3.10/site-packages/ansible\r\n executable location = /usr/bin/ansible\r\n python version = 3.10.6 (main, Aug 2 2022, 00:00:00) [GCC 11.3.1 20220421 (Red Hat 11.3.1-2)]\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n[I] </m/d/playground>-2-> ansible-config dump --only-changed\r\nANSIBLE_PIPELINING(/home/rf/.ansible.cfg) = True\r\nANSIBLE_SSH_ARGS(/home/rf/.ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s\r\nDEFAULT_FORKS(/home/rf/.ansible.cfg) = 50\r\nDEFAULT_HOST_LIST(/home/rf/.ansible.cfg) = ['/home/rf/hosts']\r\nINVENTORY_CACHE_ENABLED(/home/rf/.ansible.cfg) = True\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nFedora 36\r\n\r\n### Steps to Reproduce\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml (paste below)\r\n- name: Test\r\n hosts: localhost\r\n gather_facts: True\r\n tasks:\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl2 }}\"\r\n - debug:\r\n msg: \"{{ item }}\"\r\n loop: \"{{ sysctl.values }}\"\r\n vars:\r\n sysctl:\r\n values:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n sysctl2:\r\n - { name: \"net.ipv4.ip_forward\", value: \"1\" }\r\n```\r\n\r\n\r\n\r\n\r\n### Expected Results\r\n\r\nOutput of debug using sysctl.values\r\n\r\n### Actual Results\r\n\r\n```console\r\nPLAY [Test] ********************************************************************************************************************************************************************************************\r\n\r\nTASK [Gathering Facts] *********************************************************************************************************************************************************************************\r\nok: [localhost]\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nok: [localhost] => (item={'name': 'net.ipv4.ip_forward', 'value': '1'}) => {\r\n \"msg\": {\r\n \"name\": \"net.ipv4.ip_forward\",\r\n \"value\": \"1\"\r\n }\r\n}\r\n\r\nTASK [debug] *******************************************************************************************************************************************************************************************\r\nfatal: [localhost]: FAILED! => {\"msg\": \"Invalid data passed to 'loop', it requires a list, got this instead: <built-in method values of dict object at 0x7f63b782bf80>. 
Hint: If you passed a list/dict of just one element, try adding wantlist=True to your lookup invocation or use q/query instead of lookup.\"}\r\n\r\nPLAY RECAP *********************************************************************************************************************************************************************************************\r\nlocalhost : ok=2 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0\r\n```\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [59], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "8af920c8924b2fd9a0e4192c3c7e6085b687bfdc", "iss_html_url": "https://github.com/ansible/ansible/issues/82382", "iss_label": "bug\naffects_2.16", "title": "Ansible core 2.16.1 broke AnsibleUnsafeBytes iteration", "body": "### Summary\r\n\r\nUpgrading form 2.16.0 to 2.16.1 (Ansible 9.0.1 to 9.1.0), iterating over AnsibleUnsafeBytes does not create a list of numbers anymore.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\ncore, unsafe_proxy\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\n\r\n\r\nansible [core 2.16.1]\r\n config file = /etc/ansible/ansible.cfg\r\n configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']\r\n ansible python module location = /usr/local/lib/python3.12/site-packages/ansible\r\n ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections\r\n executable location = /usr/local/bin/ansible\r\n python version = 3.12.0 (main, Nov 29 2023, 03:32:06) [GCC 10.2.1 20210110] (/usr/local/bin/python)\r\n jinja version = 3.1.2\r\n libyaml = True\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console\r\n# if using a version older than ansible-core 2.12 you should omit the '-t all'\r\n$ ansible-config dump --only-changed -t all\r\n\r\n\r\n/bin/sh: 1: less: not found\r\n```\r\n\r\n(sorry, dockerized environment)\r\n\r\n\r\n### OS / Environment\r\n\r\nDebian bullseye / 11 (in python docker image: `python:3.12.0-bullseye`), ansible via pip (`ansible==9.1.0`)\r\n\r\n### Steps to Reproduce\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```py\r\nfrom ansible.utils.unsafe_proxy import AnsibleUnsafeText \r\nx = AnsibleUnsafeText(\"asdf\")\r\ny = x.encode(\"utf8\")\r\nlist(y)\r\n```\r\n\r\n### Expected Results\r\n\r\n```\r\n[97, 115, 100, 102]\r\n```\r\n\r\nThis is what happens on 2.16.0.\r\n\r\n### Actual Results\r\n\r\n```console\r\n[b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', 
b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00', b'\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00\\x00']\r\n```\r\n\r\n\r\n### Code of Conduct\r\n\r\n- [X] I agree to follow the Ansible Code of Conduct", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8af920c8924b2fd9a0e4192c3c7e6085b687bfdc', 'files': [{'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Other"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "bcf9cd1e2a01d8e111a28db157ebc255a5592dca", "iss_html_url": "https://github.com/ansible/ansible/issues/20085", "iss_label": "cloud\naffects_2.1\nmodule\ndocker\nbug", "title": "docker_container task fail on exit code", "body": "Unless i'm missing something i expect that if I were to do something like the following the task would fail? 
But it does not \ud83d\ude1f \r\n\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n\r\n##### ISSUE TYPE\r\n - Bug Report\r\n\r\n##### COMPONENT NAME\r\ndocker_container\r\n\r\n##### ANSIBLE VERSION\r\n```\r\n2.1.1.0\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nN/A\r\n\r\n##### STEPS TO REPRODUCE\r\n```yaml\r\n tasks:\r\n docker_container:\r\n name: \"exit-test\"\r\n image: \"ubuntu:latest\"\r\n command: \"bash -c 'exit 123'\"\r\n```\r\n##### EXPECTED RESULTS\r\nShould fail the task\r\n\r\n##### ACTUAL RESULTS\r\nTask is ok.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ansible", "pro": "ansible-modules-core", "path": ["cloud/docker/docker_container.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["cloud/docker/docker_container.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "d5324c11a0c389d2ede8375e2024cb37b9eb8ce5", "iss_html_url": "https://github.com/ansible/ansible/issues/19352", "iss_label": "affects_2.0\nmodule\nsupport:core\nbug\nfiles", "title": "Template update convert \\n to actual new line", "body": "##### ISSUE TYPE\r\n\r\n Bug Report\r\n\r\n##### COMPONENT NAME\r\n\r\ntemplate\r\n\r\n##### ANSIBLE VERSION\r\n\r\n2.0 and higher\r\nCONFIGURATION\r\n```\r\n[ssh_connection]\r\ncontrol_path = %(directory)s/%%C\r\n```\r\n##### OS / ENVIRONMENT\r\n\r\nMac OS X 10.11.6\r\nCentos 6.x, 7.x\r\nSUMMARY\r\n\r\nIn the input .j2 file, we substitute a variable with an environment variable that has a line/string that contains a grok expression containing `(?m)\\n` . The output generated by the template module in versions 2.0 and later, treats the \\n as actual line break. Where as versions up to 1.9.6 retains the literal `(?m)\\n` without replacing the \\n with an actual line break. We see the line break after we upgraded the Ansible version to 2.x.\r\n\r\nAny way we can work around this issue? Thank you for your help.\r\n##### STEPS TO REPRODUCE\r\n\r\nOur execution flow is probably not the nicest - we want to reengineer it soon. 
Basic steps:\r\n\r\n Run a shell script with ansible-playbook command that pass in an env variable with `(?m)\\n` literal.\r\n Playbook calls a main yaml file and assigns shell environment var to a included task yaml file.\r\n The task yaml file invokes the template module.\r\n\r\nIn the snippet below I stripped out other lines/vars for clarity.\r\n\r\nmain shell\r\n```\r\nset GROK_PATTERN_GENERAL_ERROR_PG=\"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n```\r\n```\r\nansible-playbook -i ../common/host.inventory \\\r\n -${VERBOSE} \\\r\n t.yml \\\r\n ${CHECK_ONLY} \\\r\n --extra-vars \"hosts='${HOST}'\r\n xlogstash_grok_general_error='${GROK_PATTERN_GENERAL_ERROR_PG}'\r\n \"\r\n```\r\nt.yml\r\n```\r\n---\r\n- hosts: 127.0.0.1\r\n connection: local\r\n\r\n tasks:\r\n - include_vars: ../common/defaults/main.yml\r\n - name: generate logstash kafka logscan filter config file\r\n include: tasks/t.yml\r\n vars:\r\n logstash_grok_general_error: \"{{xlogstash_grok_general_error}}\"\r\n```\r\ntasks/t.yml\r\n```\r\n---\r\n - name: generate logstash kafka logscan filter config file\r\n template: src=../common/templates/my.conf.j2\r\n dest=\"./500-filter.conf\"\r\n```\r\nmy.conf.j2\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"{{logstash_grok_general_error}}\"\r\n ]\r\n }\r\n```\r\nNote the `(?m)\\n` are still on the same line.\r\n##### EXPECTED RESULTS\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```\r\n##### ACTUAL RESULTS\r\n\r\nNote `(?m)\\n` now has the `\\n` as actual line break.\r\n```\r\n grok {\r\n break_on_match => \"true\"\r\n match => [\r\n \"message\", \"%{TIMESTAMP_ISO8601} ERROR \\[%{USER:handlerName}\\] %{USER:className}%{GREEDYDATA:errorline1}((?m)\r\n%{USER:logerror}%{GREEDYDATA})\"\r\n ]\r\n }\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd5324c11a0c389d2ede8375e2024cb37b9eb8ce5', 'files': [{'path': 'lib/ansible/template/__init__.py', 'Loc': {}}, {'path': 't.yml', 'Loc': [60]}]}", "own_code_loc": [{"path": "t.yml", "Loc": [60]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3\n+\n0", "info_type": "Code"}, "loctype": {"code": ["lib/ansible/template/__init__.py"], "doc": [], "test": [], "config": ["t.yml"], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7", "iss_html_url": "https://github.com/ansible/ansible/issues/73922", "iss_label": "python3\nmodule\nsupport:core\nbug\naffects_2.10", "title": "cron: Remove/delete an environment variable", "body": "### Summary\r\n\r\nWith `env=yes`, `cron` add environment variable (with the `name` & `value`) parameters.\r\nI though that having `env` + `state=absent` would remove said variable, but that's not the case (the cron file is actually removed).\r\nAs such there is no way to remove a variable and the more obvious way to attempt to do it results in a surprising result.\r\n\r\n### Issue Type\r\n\r\nBug Report\r\n\r\n### Component Name\r\n\r\nansible.builtin.cron\r\n\r\n### Ansible Version\r\n\r\n```console\r\n$ ansible --version\r\nansible 2.10.5\r\n config file = /home/user/.ansible.cfg\r\n configured module search path = 
['/usr/share/ansible']\r\n ansible python module location = /home/user/.local/lib/python3.8/site-packages/ansible\r\n executable location = /home/user/.local/bin/ansible\r\n python version = 3.8.5 (default, Jan 27 2021, 15:41:15) [GCC 9.3.0]\r\n\r\n```\r\n\r\n\r\n### Configuration\r\n\r\n```console (paste below)\r\n$ ansible-config dump --only-changed\r\n\r\n```\r\n\r\n\r\n### OS / Environment\r\n\r\nUbuntu 20.04\r\n\r\n### Steps to Reproduce\r\n\r\n```yaml\r\n cron:\r\n cron_file: foobar\r\n user: root\r\n env: yes\r\n name: \"VAR\"\r\n value: \"False\"\r\n state: absent\r\n```\r\n\r\n\r\n### Expected Results\r\n\r\nThe \"VAR\" variable is removed from /etc/cron.d/foobar\r\n\r\n### Actual Results\r\n\r\n/etc/cron.d/foobar is removed.\r\nThere is no way to remove the \"VAR\" variable.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a29fcfa9952ff40e389a5e93c880bc2a23e3f2e7', 'files': [{'path': 'lib/ansible/modules/cron.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["lib/ansible/modules/cron.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "7490044bbe28029afa9e3099d86eae9fda5f88b7", "iss_html_url": "https://github.com/ansible/ansible/issues/11351", "iss_label": "affects_2.0\naffects_2.3\nc:executor/playbook_executor\nsupport:core\nfeature\nP3", "title": "enable do/until with async tasks", "body": "##### ISSUE TYPE\nFeature Idea\n\n##### COMPONENT NAME\ncore\n\n##### ANSIBLE VERSION\n2.0\n\n##### CONFIGURATION\n\n\n##### OS / ENVIRONMENT\n\n\n##### SUMMARY\nWhen a task is marked as async, there is no way to loop until a condition is met.\nWith poll:0 and async_status you can poll for async task to complete but you cannot repeat the original async task itself until a condition is met.\n\n```\ncat /tmp/async-test.yml \n\n---\n# Run through the test of an async command\n\n- hosts: all\n tasks:\n - name: \"Check an async command\"\n command: /bin/sleep 3\n async: 5\n poll: 1\n register: command_result\n until: command_result.failed\n retries: 5\n delay: 10\n```\n\n```\n$ansible-playbook -i localhost, /tmp/async-test.yml \n ____________\n< PLAY [all] >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n _________________\n< GATHERING FACTS >\n -----------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nok: [localhost]\n ______________________________\n< TASK: Check an async command >\n ------------------------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\nfatal: [localhost] => error while evaluating conditional: command_result.failed: {% if command_result.failed %} True {% else %} False {% endif %}\n\nFATAL: all hosts have already failed -- aborting\n ____________\n< PLAY RECAP >\n ------------\n \\ ^__^\n \\ (oo)\\_______\n (__)\\ )\\/\\\n ||----w |\n || ||\n\n\n to retry, use: --limit @/opt/ashishkh/async-test.retry\n\nlocalhost : ok=1 changed=0 unreachable=2 failed=0 \n```\n\n\n##### STEPS TO REPRODUCE\n\n\n##### EXPECTED RESULTS\n\n\n##### ACTUAL RESULTS\n\n\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": "/tmp/async-test.yml", "Loc": [33]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": 
"1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["/tmp/async-test.yml"], "asset": []}} +{"organization": "ansible", "repo_name": "ansible", "base_commit": "833970483100bfe89123a5718606234115921aec", "iss_html_url": "https://github.com/ansible/ansible/issues/67993", "iss_label": "cloud\naws\nopenstack\nmodule\nsupport:community\naffects_2.5\nbug\ntraceback\nsystem", "title": "Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol(unable to disable stickiness not supported in NLB)", "body": "##### SUMMARY\r\nWe are using Ansible 2.5 to deploy AWS resources in our environment. From March 02, 2019 our deployment is failing with the below error.\r\n\r\nERROR:\r\n=====\r\nTASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\nAn exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! => {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n##### ISSUE TYPE\r\n- Bug Report - Unable to disable stickiness not supported in NLB\r\n\r\n##### COMPONENT NAME\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n##### ANSIBLE VERSION\r\n```paste below\r\nAnsible version = 2.5.0\r\n```\r\n\r\n##### CONFIGURATION\r\n<!--- Paste verbatim output from \"ansible-config dump --only-changed\" between quotes -->\r\n```paste below\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') 
}}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n\r\n```\r\n\r\n##### OS / ENVIRONMENT\r\nUbuntu 18.04 LTS / AWS environment\r\n\r\n\r\n##### STEPS TO REPRODUCE\r\nKindly use the below playbook to deploy loadbalancer using Ansible on AWS cloud.\r\n\r\n<!--- Paste example playbooks or commands between quotes below -->\r\n```yaml\r\n- name: \"target group for {{ server_name }} loadbalancer\"\r\n elb_target_group:\r\n state: present\r\n name: \"{{ server_name }}-elb\"\r\n protocol: tcp\r\n port: 80\r\n target_type: instance\r\n deregistration_delay_timeout: 35\r\n modify_targets: False\r\n vpc_id: \"{{ vpc_out.vpcs.0.id }}\"\r\n health_check_protocol: \"{{ load_balancer_ping_protocol | default('http') }}\"\r\n health_check_port: \"{{ load_balancer_ping_port | default('80') }}\"\r\n health_check_path: \"{{ load_balancer_ping_path | default('/elb/ping')}}\"\r\n health_check_interval: 30\r\n unhealthy_threshold_count: 2\r\n healthy_threshold_count: 2\r\n stickiness_enabled: False\r\n tags: \"{{ aws.tags_as_dict }}\"\r\n register: target_group_out\r\n```\r\n\r\n<!--- HINT: You can paste gist.github.com links for larger files -->\r\n\r\n##### EXPECTED RESULTS\r\nAn AWS Network loadbalancer will be created.\r\n\r\n\r\n##### ACTUAL RESULTS\r\nThe deployment fails with below error.\r\n\r\n<!--- Paste verbatim command output between quotes -->\r\n```paste below\r\n TASK [immutable_server : target group for analytics-tst-plebos loadbalancer] ***\r\n17:21:08 An exception occurred during task execution. To see the full traceback, use -vvv. The error was: InvalidConfigurationRequestException:\r\nAn error occurred (InvalidConfigurationRequest) when calling the ModifyTargetGroupAttributes operation: \r\nStickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\r\n17:21:08 fatal: [localhost]: FAILED! 
=> {\"changed\": false, \"error\": {\"code\": \"InvalidConfigurationRequest\", \"message\": \"Stickiness type 'lb_cookie'\r\nis not supported for target groups with the TCP protocol\", \"type\": \"Sender\"}, \"msg\": \"An error occurred (InvalidConfigurationRequest) \r\nwhen calling the ModifyTargetGroupAttributes operation: Stickiness type 'lb_cookie' is not supported for target groups with the TCP protocol\", \r\n\"response_metadata\": {\"http_headers\": {\"connection\": \"close\", \"content-length\": \"359\", \"content-type\": \"text/xml\", \"date\": \"Tue, 03 Mar 2020 11:51:08 GMT\", \r\n\"x-amzn-requestid\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\"}, \"http_status_code\": 400, \"request_id\": \"23b0ca87-e0fb-4b84-b93b-ae5b1363df53\", \"retry_attempts\": 0}}\r\n\r\n```\r\n\r\n##### References\r\nI can see a similar issue occurred for terraform users as well.\r\n\r\nhttps://github.com/terraform-providers/terraform-provider-aws/issues/10494\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "6f718cee740e7cd423edd1136db78c5be49fa7c0", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2467", "iss_label": "question\nStale", "title": "Problems with weights", "body": "## \u2754Question\r\nHello, I have just run trainy.py script with my data and faced a problem - you wrote that weights are saved in runs directory, but in my case I have not found them. Everything is fine with hyp.yaml and opt.yaml but folder \"weights\" is empty. \r\nDo you have any guesses about this issue? \r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6f718cee740e7cd423edd1136db78c5be49fa7c0', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [470, 454]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nweights\u627e\u4e0d\u89c1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "06831aa9e905e0fa703958f6b3f3db443cf477f3", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/9079", "iss_label": "", "title": "Does adjusting the number of classes of a pretrained model work?", "body": "### Search before asking\r\n\r\n- [x] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions. *\r\n\r\n### Question\r\n\r\nHi everyone,\r\n\r\nI'm a bit confused about how to properly load a pretrained model with an adjusted number of classes for training with a custom dataset.\r\n\r\nOn the [Load YOLOv5 from PyTorch Hub \u2b50](https://github.com/ultralytics/yolov5/issues/36) page you've explained that one can adjust the number of classes in the pretrained model by using the following command. 
`model = torch.hub.load('ultralytics/yolov5', 'yolov5s', classes=10)`\r\n\r\n<img width=\"999\" alt=\"Bildschirmfoto 2022-08-22 um 08 13 15\" src=\"https://user-images.githubusercontent.com/5917496/185851461-b177aa78-2b56-46a1-9c43-081d2a746938.png\">\r\n\r\nWhen I do so, I can see that a model.yaml file is overwritten, but I do not know where this file is stored. \r\n\r\nNow, what actually confuses me about the number of classes, is that when I try to use this pretrained model in detection, without any further training. I see an error, that the model was trained with nc=80 and my data is incompatible with nc=13:\r\n\r\n`AssertionError: ['yolov5s6.pt'] (80 classes) trained on different --data than what you passed (13 classes). Pass correct combination of --weights and --data that are trained together.`\r\n\r\nI know that I can not expect any proper predictions since the last layers are initialized with random weights, but I was expecting that the model is compatible with the 13 classes dataset.\r\n\r\nIs this behavior to be expected or am I doing something wrong here? \r\nDo I need to find and use the model.yaml file and is the only thing changed in there 'nc=13'?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '06831aa9e905e0fa703958f6b3f3db443cf477f3', 'files': [{'path': 'train.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "ee8988b8a2ed07af1b7c8807d39aad35369f0e28", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8", "iss_label": "Stale", "title": "training actually can not work", "body": "After trained on several epochs, I found the mAP is still very low. 
Does the training really works?\r\n\r\n```\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 14/299 6.4G 0.02273 0.002925 0.0003764 0.02603 11 640: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [54:20<00:00, 2.13it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [13:37<00:00, 8.51it/s]\r\n all 5.57e+04 1.74e+05 0.000332 0.00039 2.4e-06 8.59e-07\r\n\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 15/299 6.4G 0.02232 0.002874 0.000371 0.02556 7 640: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [54:36<00:00, 2.12it/s]\r\n Class Images Targets P R mAP@.5 mAP@.5:.95: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 6960/6960 [14:23<00:00, 8.06it/s]\r\n all 5.57e+04 1.74e+05 0.000342 0.000401 2.44e-06 8.66e-07\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 
'ee8988b8a2ed07af1b7c8807d39aad35369f0e28', 'files': [{'path': 'models/yolov5s.yaml', 'Loc': {'(None, None, 2)': {'mod': [2]}}, 'status': 'modified'}, {'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": ["models/yolov5s.yaml"], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "901243c7806be07b31073440cf721e73532a0734", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/894", "iss_label": "question", "title": "training stuck when loading dataset", "body": "## \u2754Question\r\nI follow the instructions to run coco128, \r\n```\r\npython train.py --img 640 --batch 16 --epochs 5 --data ./data/coco128.yaml --cfg ./models/yolov5s.yaml --weights '',\r\n```\r\nthe ouput is \r\n```\r\nImage sizes 640 train, 640 test\r\nUsing 8 dataloader workers\r\nStarting training for 5 epochs...\r\n\r\n Epoch gpu_mem GIoU obj cls total targets img_size\r\n 0%| | 0/8 [00:00<?, ?it/s\r\n```\r\nthen it is stuck, I found that it is stucking at loading the dataset, \r\nin https://github.com/ultralytics/yolov5/blob/master/train.py#L244, \r\n```\r\nfor i, (imgs, targets, paths, _) in pbar:\r\n```\r\nit just stops here, could you help me ?\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '901243c7806be07b31073440cf721e73532a0734', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [388]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "63060910a68bfde238872d629ab88e2e7bc736e8", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/3735", "iss_label": "question\nStale", "title": "Results interpretation", "body": "Hello,\r\n\r\nAnother question to do with results interpretation. I am not very sure how to interpret the results.txt file that gets generated after training is over. 
Also, is there any way to extract the number of false positives, true positives, and false negatives, as well as to see the total mean average accuracy and loss (like with yolov4)?\r\n\r\nFurther, after training is done, can the best weights obtained from training be used to test on unseen data (more specifically, multiple images)?\r\n\r\nThanks in advance again!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '63060910a68bfde238872d629ab88e2e7bc736e8', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "dc54ed5763720ced4f6784552c47534af5413d45", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6062", "iss_label": "question\nStale", "title": "How to add some private information into .pt file?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nyolov5 is a great algorithm, but I'm having some problems. Specifically, I want to add some private information to the .pt file; can this be done?\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dc54ed5763720ced4f6784552c47534af5413d45', 'files': [{'path': 'train.py', 'Loc': {\"(None, 'train', 58)\": {'mod': [377]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "79af1144c270ac7169553d450b9170f9c60f92e4", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4517", "iss_label": "question\nStale", "title": "what is mosaic and what is its default and how to delete it", "body": "what is the meaning of mosaic\r\n\r\nwhere can I find its default parameter\r\n\r\nhow to stop mosaic and stop augmentation in general\r\n\r\nI use only this line; does it augment data by default or not?
how to stop augmentation if it exists \r\n```\r\n!python train.py --img 640 --batch 16 --epochs 400 --data /mydrive/data.yaml \\\r\n --weights /mydrive/yolov5s.pt --cache --project /mydrive/train/\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '79af1144c270ac7169553d450b9170f9c60f92e4', 'files': [{'path': 'data/hyps/hyp.scratch.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config file"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["data/hyps/hyp.scratch.yaml"], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "0d8a1842373e55f8f639adede0c3d378f1ffbea5", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4717", "iss_label": "bug", "title": "[onnx export.py error] Unsupported ONNX opset version", "body": "`ONNX: starting export with onnx 1.10.1...`\r\n`ONNX: export failure: Unsupported ONNX opset version: 13`\r\n\r\nI'm using\r\nyolov5-5.0, pytorch1.7.0+cu101 and python3.7.9.\r\n\r\nHow can I solve it?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0d8a1842373e55f8f639adede0c3d378f1ffbea5', 'files': [{'path': 'export.py', 'Loc': {\"(None, 'parse_opt', 166)\": {'mod': [179]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["export.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "886f1c03d839575afecb059accf74296fad395b6", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2432", "iss_label": "question", "title": "Experiments on GhostNet", "body": "## \u2754Question\r\nI am just wondering about the performance when using GhostNet in experimental.py. Could you please share this experiment?\r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '886f1c03d839575afecb059accf74296fad395b6', 'files': [{'path': 'Models/yolov5l.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["Models/yolov5l.yaml"], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "2026d4c5eb4e3e48b5295106db85c844000d95d1", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1498", "iss_label": "question\nStale", "title": "calculate fps on local system", "body": "## \u2754Question\r\nI have been using the code to do detection from a webcam.
How can I know what is the speed of detection (fps) in my local system?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2026d4c5eb4e3e48b5295106db85c844000d95d1', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 61)': {'mod': [61]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "14797370646d25e226f0093a5982d5cd54ba729a", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2797", "iss_label": "question", "title": "large scale dataset use --cache-images flag", "body": "## \u2754Question\r\nhello ~ , i have dataset with a million images about 450GB and i want to use --cache-images accelerate training\uff08i have 128GB RAM\uff09\uff0ccan i split the whole dataset into many sub dataset and training them one by one\uff08like resume training\uff09 \uff1f\r\n\r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '14797370646d25e226f0093a5982d5cd54ba729a', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [466]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "f5335f22bbd6037124d60edb3c2d1934d7673e23", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8907", "iss_label": "question\nStale", "title": "I am making UI by QT for Yolov5 training. Where is making the result image (results.png) after training? ", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nI am making UI by QT for Yolov5 training. Where is making the result image (results.png) after training? \r\n\r\nI would like to draw the graph for (train/box_loss), (metrics/precision), and (metrics/recall) per each an epoch every time an epoch of the train is finished.\r\n\r\n Where is making the result image (results.png) after training? 
\r\n\r\nThank you for your help.\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f5335f22bbd6037124d60edb3c2d1934d7673e23', 'files': [{'path': 'utils/plots.py', 'Loc': {\"(None, 'plot_results', 418)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/plots.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "0ab303b04499b6b912d8212a4fa10fe3fcb78efa", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/8708", "iss_label": "question\nStale", "title": "Significance of --half?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nCan you please let me know the significance of --half during training process....\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0ab303b04499b6b912d8212a4fa10fe3fcb78efa', 'files': [{'path': 'val.py', 'Loc': {\"(None, 'parse_opt', 330)\": {'mod': [351]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["val.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "b74929c910f9cd99d2ece587e57bce1ae000d3ba", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4252", "iss_label": "question", "title": "Training speed and memory", "body": "I noticed your instructions about training,\r\nRun commands below to reproduce results on COCO dataset (dataset auto-downloads on first use). Training times for YOLOv5s/m/l/x are 2/4/6/8 days on a single V100 (multi-GPU times faster). Use the largest --batch-size your GPU allows (batch sizes shown for 16 GB devices).\r\nI want to train from scratch on the coco dataset.(A100 x1).The code was just downloaded.\r\n\r\nThe following is the situation during my training.The specific parameters can be seen in the screenshot.\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 64 -> 16min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 128 ->16min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 ->20min/epoch\r\npython train.py --cfg models/yolov5s.yaml --data data/coco.yaml --device 0 --batch-size 192 --workers 16->16min/epoch\r\n![sendpix7](https://user-images.githubusercontent.com/39581901/127759471-a110c68f-d1d4-4580-afd2-ae8c8a17ef4a.jpg)\r\nMy question\r\n1. Why I increased the batch size but the time required for training did not decrease\r\n2. The relationship between workers and batch size, because I noticed that you seem to set it to a maximum of 8 in the code (why it is 8),\r\n3. When epoch=0 and 1, the GPU memory has changed, about x1.5? 
What may be the reason for this,", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b74929c910f9cd99d2ece587e57bce1ae000d3ba', 'files': [{'path': 'train.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "404749a33cc29d119f54b2ce35bf3b33a847a487", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/2186", "iss_label": "question", "title": "Can we return objectness score and class score?", "body": "## \u2754Question\r\nI am wondering if it is possible to return confidence scores for objectness and classification separately for each predicted box during inference? I might be conceptually off base here, but I am interested in understanding if the model is unsure if the box itself is correct or if the class it is assigning to the box is correct. My understanding is the `conf` that is returned now is a combo of the two? ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '404749a33cc29d119f54b2ce35bf3b33a847a487', 'files': [{'path': 'detect.py', 'Loc': {\"(None, 'detect', 18)\": {'mod': [103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113]}}, 'status': 'modified'}, {'path': 'utils/general.py', 'Loc': {\"(None, 'non_max_suppression', 340)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/general.py", "detect.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "dabad5793a638cba1e5a2bbb878c9b87fe1a14a0", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/3942", "iss_label": "enhancement\nStale", "title": "For online cutting training and detection can be improve", "body": "## \ud83d\ude80 Feature\r\n\r\nFor big image training, usually people thinking about to cut the images, but yolov5 can only resize the image to small size. Such as VisDrone dataset, the smallest image can have 960*540 size, if resize to 640*640, size would be 640*360, but the target in dataset mostly are small object, resize the image make the target become more smaller, but if use bigger resolution, the cuda memory would exceed.\r\n\r\nSo I thought online cutting training and detection would be a good feature for yolov5 to improve, although cutting image would also increase the train time, but it would be a great idea for people who don't have large computing power GPU, also I think cutting image would be effective for small object detection. 
Although it's not a new idea in detection, it would be a useful way for people to build their own detector.\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dabad5793a638cba1e5a2bbb878c9b87fe1a14a0', 'files': [{'path': 'utils/augmentations.py', 'Loc': {\"('Albumentations', '__init__', 16)\": {'mod': [22]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["utils/augmentations.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "c8c5ef36c9a19c7843993ee8d51aebb685467eca", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/1238", "iss_label": "question", "title": "img-weights", "body": "## \u2754Question\r\nparser.add_argument('--img-weights', action='store_true', help='use weighted image selection for training')\r\nin order to make --img-weights work, what else do I need to do? \r\ndataset = LoadImagesAndLabels(path, imgsz, batch_size,\r\n augment=augment, # augment images\r\n hyp=hyp, # augmentation hyperparameters\r\n rect=rect, # rectangular training\r\n cache_images=cache,\r\n single_cls=opt.single_cls,\r\n stride=int(stride),\r\n pad=pad),\r\n should I add an extra param image_weights=True?\r\n\r\n \r\n## Additional context\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c8c5ef36c9a19c7843993ee8d51aebb685467eca', 'files': [{'path': 'train.py', 'Loc': {'(None, None, None)': {'mod': [397]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["train.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/7072", "iss_label": "question", "title": "why can't I reproduce the mAP provided by README.md\uff08v6.1\uff09\uff1f", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nI used the method recommended by README.md(v6.1) to reproduce the mAP, but I failed. \r\n'python train.py --data coco.yaml --cfg yolov5s.yaml --weights ' ' --hyp hyp.scratch-low.yaml --img 640 --batch-size 64 --epochs 300' .\r\nEverything is at its default value, and the best mAP\uff08yolov5s\uff09 I got is 37.057% (the best mAP verified at the end of each epoch, 5000 images); it still has a gap of 0.4% mAP (37.4%).
\r\nSimilarly, I reproduced the mAP\uff08yolov5n\uff09\uff0c27.586%----28.0%\uff0cNever get published results.\r\nMy GPU is GTX NVIDIA RTX A4000\uff0816116MiB\uff09, and I think it may be enough.\r\n\r\nIs this a normal error caused by equipment\uff08GPU) differences, or are there other reasons\uff1f\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9cd89b75cca8bb165a3b19c9b8356f7b3bb22b31', 'files': [{'path': 'data/scripts/get_coco.sh', 'Loc': {'(None, None, 13)': {'mod': [13]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code\nDoc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["data/scripts/get_coco.sh"]}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "079b36d72ba2ef298f7ae4dc283d8c7975eb02f6", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/6540", "iss_label": "question", "title": "Is YOLOv5 able to detect a specific number of classes according to the project's need, like just 2 or 3 classes?", "body": "### Search before asking\n\n- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.\n\n\n### Question\n\nHi, I'm using YOLOv5 in my project and I have a question. If I use \"--classes \" it could detect one type of class, but is there anyway that I can detect more than one type, like 2 or 3 different types? I've already tried \"-- classes 0 1\" or \"-- classes [0] [1]\" but without success. Thanks for the help!\n\n### Additional\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '079b36d72ba2ef298f7ae4dc283d8c7975eb02f6', 'files': [{'path': 'detect.py', 'Loc': {\"(None, 'parse_opt', 216)\": {'mod': [231]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["detect.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "e96c74b5a1c4a27934c5d8ad52cde778af248ed8", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4357", "iss_label": "question\nStale", "title": "Average Precision for each class", "body": "## Is there any way to see the average precision for each class?\r\n\r\nI have run my model for 1,000 epochs and I have a bunch of metrics (which are AMAZING by the way, thanks so making it so easy to see them!) and I have mAP, but I was wondering if there was a way to see the AP for each class? Like a table or something. \r\n\r\nIn addition, is it possible to see the precision-recall graphs for each class? I can see something in the images tab on wandb, but as I have 80 classes, it looks very messy. 
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e96c74b5a1c4a27934c5d8ad52cde778af248ed8', 'files': [{'path': 'val.py', 'Loc': {\"(None, 'parse_opt', 293)\": {'mod': [305]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["val.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ultralytics", "repo_name": "yolov5", "base_commit": "96e36a7c913e2433446ff410a4cf60041010a524", "iss_html_url": "https://github.com/ultralytics/yolov5/issues/4152", "iss_label": "question", "title": "Format of data for testing trained model", "body": "In what format do I need to feed the validation dataset to the val.py file? Should images and markup be in the same folder or in different ones? In what format should the coordinates of the bounding boxes be in - yolo or something else?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '96e36a7c913e2433446ff410a4cf60041010a524', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "eaf5ec4467795344e7d9601515b017fd8c46e44b", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/439", "iss_label": "", "title": " decoding error in preprocessing synthesizer", "body": "I get the following error while running `synthesizer_preprocess_audio.py`.\r\n\r\n```\r\nArguments:\r\n datasets_root: /home/amin/voice_cloning/libri_100\r\n out_dir: /home/amin/voice_cloning/libri_100/SV2TTS/synthesizer\r\n n_processes: None\r\n skip_existing: True\r\n hparams: \r\n\r\nUsing data from:\r\n /home/amin/voice_cloning/libri_100/LibriSpeech/train-clean-100\r\nLibriSpeech: 0%| | 0/502 [00:00<?, ?speakers/s]\r\nmultiprocessing.pool.RemoteTraceback: \r\n\"\"\"\r\nTraceback (most recent call last):\r\n File \"/usr/lib/python3.6/multiprocessing/pool.py\", line 119, in worker\r\n result = (True, func(*args, **kwds))\r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 62, in preprocess_speaker\r\n alignments = [line.rstrip().split(\" \") for line in alignments_file]\r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 62, in <listcomp>\r\n alignments = [line.rstrip().split(\" \") for line in alignments_file]\r\n File \"/usr/lib/python3.6/codecs.py\", line 321, in decode\r\n (result, consumed) = self._buffer_decode(data, self.errors, final)\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte\r\n\"\"\"\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"synthesizer_preprocess_audio.py\", line 52, in <module>\r\n preprocess_librispeech(**vars(args)) \r\n File \"/home/amin/voice_cloning/Real-Time-Voice-Cloning-master/synthesizer/preprocess.py\", line 36, in preprocess_librispeech\r\n for speaker_metadata in tqdm(job, \"LibriSpeech\", len(speaker_dirs), unit=\"speakers\"):\r\n File \"/home/amin/.local/lib/python3.6/site-packages/tqdm/std.py\", line 1130, in 
__iter__\r\n for obj in iterable:\r\n File \"/usr/lib/python3.6/multiprocessing/pool.py\", line 735, in next\r\n raise value\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xa2 in position 37: invalid start byte\r\n```\r\n\r\nCan anyone help? It can save a lot of time for me.\r\nThanks.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'eaf5ec4467795344e7d9601515b017fd8c46e44b', 'files': [{'path': 'synthesizer/preprocess.py', 'Loc': {\"(None, 'preprocess_speaker', 54)\": {'mod': [60]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/629", "iss_label": "", "title": "Error in macOS when trying to launch the toolbox", "body": "Traceback (most recent call last):\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/demo_toolbox.py\", line 2, in <module>\r\n from toolbox import Toolbox\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/__init__.py\", line 1, in <module>\r\n from toolbox.ui import UI\r\n File \"/Users/luke/Documents/Real-Time-Voice-Cloning-master/toolbox/ui.py\", line 6, in <module>\r\n from encoder.inference import plot_embedding_as_heatmap\r\nModuleNotFoundError: No module named 'encoder.inference'", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'encoder/inference.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder/inference.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1156", "iss_label": "", "title": "missing SV2TTS/", "body": "Hey, I'm trying to finetune the pretrained model but it looks like I am missing the SV2TTS/ directory which contains train.txt, etc.\r\nI have a saved_models/ directory which has three *.pt files for the three components of this TTS architecture.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'synthesizer_preprocess_audio.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer_preprocess_audio.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "e32cf8f4ddb63d9a7603eeb31f1855b54926aee6", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/549", "iss_label": "", "title": "Import Error", "body": "Hey, i am trying to run this code and everytime i run demo_toolbox.py there comes an error \"failed to load qt binding\" i tried reinstalling matplotlib and 
also tried installing PyQt5.\r\n\r\nNeed help!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e32cf8f4ddb63d9a7603eeb31f1855b54926aee6', 'files': [{'path': 'toolbox/ui.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/117", "iss_label": "", "title": "ModuleNotFoundError: No module named 'tensorflow.contrib.seq2seq'", "body": "When running demo_cli.py\r\n\r\nPython = 3.7.4\r\nTensorFlow = 2.0 RC\r\nCUDA = 10.1\r\ncuDNN = Installed for the right CUDA version\r\nWindows = 10", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8e6499b10d5a074bdfe8ee6db8eec60e1060ccc1', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/275", "iss_label": "", "title": "Speaker verification implementation", "body": "I need just the speaker verification part, which is the implementation of the [GENERALIZED END-TO-END LOSS FOR SPEAKER VERIFICATION](https://arxiv.org/pdf/1710.10467.pdf) paper; how can I proceed to get it, please?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5\nAsking where the feature is implemented", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["encoder/"]}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/855", "iss_label": "", "title": "Output audio spectrum - low frequencies", "body": "Hi, I am trying to train a new model in Polish, but after 476k steps the output sound is very \"robotic\". I was trying to find out why that happened and noticed (based on my output and @blue-fish samples: https://blue-fish.github.io/experiments/RTVC-FT-1.html) that the spectrum of this model doesn't include high frequencies compared to Google. Both are in logarithmic scale.
\r\n\r\nOur output: \r\n<img width=\"610\" alt=\"Zrzut ekranu 2021-10-2 o 20 29 59\" src=\"https://user-images.githubusercontent.com/6368894/135728051-397ec675-d2ac-4e5a-af89-a8e0fcef8ff7.png\">\r\n \r\nGoogle: (take a note its logarithmic scale)\r\n<img width=\"610\" alt=\"Zrzut ekranu 2021-10-2 o 20 30 30\" src=\"https://user-images.githubusercontent.com/6368894/135728056-5a7b83dd-f228-4a4f-9dae-44ce86d1e2b1.png\">\r\n\r\nDo you have any idea how to improve this?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [77]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1122", "iss_label": "", "title": "Requirements.txt failed to install with obscure issue with installing audioread", "body": "I ran into a few issues along the way that I was able to solve, namely errors like this:\r\n\r\n WARNING: Failed to write executable - trying to use .deleteme logic\r\n ERROR: Could not install packages due to an OSError: [WinError 2] The system cannot find the file specified: \r\n 'C:\\\\Python310\\\\Scripts\\\\f2py.exe' -> 'C:\\\\Python310\\\\Scripts\\\\f2py.exe.deleteme'\r\n\r\nI fixed these by adding `--user` to the pip command.\r\n\r\nI also had to change requirements.txt to a newer version of numpy (1.22.1) to prevent it from failing to install due to older versions not being compatible with the version of Python I already have installed (3.10.6)\r\n\r\nBut now I'm stuck on this one:\r\n\r\n Requirement already satisfied: jsonpointer>=1.9 in c:\\users\\michael\\appdata\\roaming\\python\\python310\\site-packages (from jsonpatch->visdom==0.1.8.9->-r R:\\requirements.txt (line 15)) (2.3)\r\n Using legacy 'setup.py install' for umap-learn, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for visdom, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for audioread, since package 'wheel' is not installed.\r\n Using legacy 'setup.py install' for pynndescent, since package 'wheel' is not installed.\r\n Installing collected packages: audioread, visdom, SoundFile, sounddevice, scikit-learn, resampy, pooch, matplotlib, pynndescent, librosa, umap-learn\r\n Running setup.py install for audioread ... error\r\n error: subprocess-exited-with-error\r\n \r\n \u00d7 Running setup.py install for audioread did not run successfully.\r\n \u2502 exit code: 1\r\n \u2570\u2500> [40 lines of output]\r\n C:\\Users\\michael\\AppData\\Local\\Temp\\pip-install-nat_itg2\\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\\setup.py:17: DeprecationWarning: the imp module is deprecated in favour of importlib and slated for removal in Python 3.12; see the module's documentation for alternative uses\r\n import imp\r\n running install\r\n C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. 
Use build and pip and other standards-based tools.\r\n warnings.warn(\r\n Traceback (most recent call last):\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\util.py\", line 258, in subst_vars\r\n return _subst_compat(s).format_map(lookup)\r\n KeyError: 'py_version_nodot_plat'\r\n \r\n During handling of the above exception, another exception occurred:\r\n \r\n Traceback (most recent call last):\r\n File \"<string>\", line 2, in <module>\r\n File \"<pip-setuptools-caller>\", line 34, in <module>\r\n File \"C:\\Users\\michael\\AppData\\Local\\Temp\\pip-install-nat_itg2\\audioread_fa5fbfcd88364fc89c7b2a9e454b5549\\setup.py\", line 27, in <module>\r\n setup(name='audioread',\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\__init__.py\", line 153, in setup\r\n return distutils.core.setup(**attrs)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py\", line 148, in setup\r\n return run_commands(dist)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\core.py\", line 163, in run_commands\r\n dist.run_commands()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py\", line 967, in run_commands\r\n self.run_command(cmd)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\dist.py\", line 985, in run_command\r\n cmd_obj.ensure_finalized()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\cmd.py\", line 107, in ensure_finalized\r\n self.finalize_options()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\command\\install.py\", line 45, in finalize_options\r\n orig.install.finalize_options(self)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 381, in finalize_options\r\n self.expand_dirs()\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 563, in expand_dirs\r\n self._expand_attrs(['install_purelib', 'install_platlib',\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\command\\install.py\", line 553, in _expand_attrs\r\n val = subst_vars(val, self.config_vars)\r\n File \"C:\\Users\\michael\\AppData\\Roaming\\Python\\Python310\\site-packages\\setuptools\\_distutils\\util.py\", line 260, in subst_vars\r\n raise ValueError(f\"invalid variable {var}\")\r\n ValueError: invalid variable 'py_version_nodot_plat'\r\n [end of output]\r\n \r\n note: This error originates from a subprocess, and is likely not a problem with pip.\r\n error: legacy-install-failure\r\n \r\n \u00d7 Encountered error while trying to install package.\r\n \u2570\u2500> audioread\r\n \r\n note: This is an issue with the package mentioned above, not pip.\r\n hint: See above for output from the failure.\r\n\r\nI'm not sure if the issue is due to \"setup.py install\" being deprecated; if that's the case I have no idea what the fix is because I think this is being required somewhere else - maybe another package needs a newer version? 
But I have no idea which one.\r\n\r\nI also thought maybe it could be that wheel wasn't installed, given the `since package 'wheel' is not installed.` lines, but when I try to install it, it says it's already installed:\r\n\r\n    C:\\> pip install wheel --user\r\n\r\n    Requirement already satisfied: wheel in c:\\python310\\lib\\site-packages (0.37.1)\r\n\r\nThere's also the invalid variable error, but I have no idea what that is referring to.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '98d0ca4d4d140a4bb6bc7d54c84b1915a79041d5', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\nDependency declaration"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "95adc699c1deb637f485e85a5107d40da0ad94fc", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/717", "iss_label": "", "title": "I can't use Dataset/Speaker/Utterance", "body": "I can't use the upper section in the software. When loading, it shows:\r\nWarning: you did not pass a root directory for datasets as argument.\r\nHow can I fix this?\r\nThank you\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '95adc699c1deb637f485e85a5107d40da0ad94fc', 'files': [{'path': 'demo_toolbox.py', 'Loc': {'(None, None, None)': {'mod': [15]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nwarning", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["demo_toolbox.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "039f7e5402e6d9da7fad5022dae038cdfb507b39", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/13", "iss_label": "", "title": "problem with utils.argutils in python 3.6", "body": "Hi, on Win 10 64-bit with Python 3.6 it fails to import print_args because it can't find argutils.\r\nI think I have a relative import error but can't solve it.\r\n\r\nBtw, nice job on what I heard in the YouTube demo.\r\nIf I manually try to import utils from the root dir, it seems to load a different utils file. \r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '039f7e5402e6d9da7fad5022dae038cdfb507b39', 'files': [{'path': 'synthesizer/__init__.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "7432046efc23cabf176f9fdc8d2fd67020059478", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/884", "iss_label": "", "title": "Using a different speaker encoder", "body": "Hello, I really appreciate the work on display here. I was just wondering if I could use a different speaker encoder. 
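An editorial aside on the audioread record above: pip reports wheel as already satisfied in `c:\python310\lib\site-packages`, yet the build falls back to legacy `setup.py install` because it cannot see wheel, which suggests the system site and the `--user` site have drifted apart. A minimal stdlib-only diagnostic sketch (nothing here is repo-specific) to print the locations involved:

```python
# Minimal diagnostic sketch: show the interpreter plus the system and user
# site-packages directories, since "wheel" can be satisfied in one location
# while --user installs and build checks resolve against the other.
import site
import sys
import sysconfig

print("interpreter:   ", sys.executable)
print("system purelib:", sysconfig.get_paths()["purelib"])
print("user site:     ", site.getusersitepackages())
```

If wheel only shows up under the system path, forcing a copy into the user site (e.g. `python -m pip install --user --force-reinstall wheel`) is one plausible thing to try; that is a guess from the quoted output, not a confirmed fix.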
If someone has used a different encoder, could you explain the difficulties of replacing the encoder and how the results differed from the speaker encoder already in use?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7432046efc23cabf176f9fdc8d2fd67020059478', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {\"('Toolbox', 'add_real_utterance', 182)\": {'mod': [191]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "a32962bb7b4827660646ac6dabf62309aea08a91", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/488", "iss_label": "", "title": "preprocessing VoxCeleb2 is not working", "body": "While running encoder_preprocess on the VoxCeleb2 dataset, I'm getting the following warning and nothing else happens. Not sure why.\r\n\r\n\r\n```\r\nraw: Preprocessing data for 5994 speakers.\r\nraw: 0%| | 0/5994 [00:00<?, ?speakers/s]\r\n/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.\r\n warnings.warn('PySoundFile failed. Trying audioread instead.')\r\n/home/amin/.local/lib/python3.6/site-packages/librosa/core/audio.py:161: UserWarning: PySoundFile failed. Trying audioread instead.\r\n warnings.warn('PySoundFile failed. Trying audioread instead.')\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a32962bb7b4827660646ac6dabf62309aea08a91', 'files': [{'path': 'encoder/preprocess.py', 'Loc': {\"(None, 'preprocess_voxceleb2', 164)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder/preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "0713f860a3dd41afb56e83cff84dbdf589d5e11a", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1065", "iss_label": "", "title": "vocoder_dataset.py ValueError", "body": "I am trying to use the LibriSpeech dataset to train the vocoder. \r\nAnd I got a ValueError while training. \r\n```numpy.random._bounded_integers._rand_int32 ValueError: low >= high```\r\n\r\nIt occurs at line 61 of vocoder_dataset.py, \r\n```mel_offsets = [np.random.randint(0, offset) for offset in max_offsets]```\r\nSo I assume there is something wrong with the value of offset? e.g. 
offset=0, so np.random.randint cannot draw a number from [0, 0)?\r\nHas anyone encountered this problem too?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0713f860a3dd41afb56e83cff84dbdf589d5e11a', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [88]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5425557efe30863267f805851f918124191e0be0", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/651", "iss_label": "", "title": "Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc", "body": "Hello. \r\nPlease help me, I do not know how to solve my problem. \r\nI ran the following and they completed without errors: \r\n`python synthesizer_preprocess_audio.py <datasets_root>`\r\n`python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer`\r\nbut after typing `python synthesizer_train.py my_run <datasets_root>/SV2TTS/synthesizer` \r\nit shows me a long error\r\n\r\n\r\n```\r\nArguments:\r\n name: my_run\r\n synthesizer_root: C:\\Users\\matve\\Documents\\Tacotron\\datasets\\SV2TTS\\synthesizer\r\n models_dir: synthesizer/saved_models/\r\n mode: synthesis\r\n GTA: True\r\n restore: True\r\n summary_interval: 2500\r\n embedding_interval: 10000\r\n checkpoint_interval: 2000\r\n eval_interval: 100000\r\n tacotron_train_steps: 2000000\r\n tf_log_level: 1\r\n slack_url: None\r\n hparams: \r\n\r\nCheckpoint path: synthesizer/saved_models/logs-my_run\\taco_pretrained\\tacotron_model.ckpt\r\nLoading training data from: C:\\Users\\matve\\Documents\\Tacotron\\datasets\\SV2TTS\\synthesizer\\train.txt\r\nUsing model: Tacotron\r\nHyperparameters:\r\n allow_clipping_in_normalization: True\r\n attention_dim: 128\r\n attention_filters: 32\r\n attention_kernel: (31,)\r\n cbhg_conv_channels: 128\r\n cbhg_highway_units: 128\r\n cbhg_highwaynet_layers: 4\r\n cbhg_kernels: 8\r\n cbhg_pool_size: 2\r\n cbhg_projection: 256\r\n cbhg_projection_kernel_size: 3\r\n cbhg_rnn_units: 128\r\n cleaners: english_cleaners\r\n clip_for_wavenet: True\r\n clip_mels_length: True\r\n cross_entropy_pos_weight: 20\r\n cumulative_weights: True\r\n decoder_layers: 2\r\n decoder_lstm_units: 1024\r\n embedding_dim: 512\r\n enc_conv_channels: 512\r\n enc_conv_kernel_size: (5,)\r\n enc_conv_num_layers: 3\r\n encoder_lstm_units: 256\r\n fmax: 7600\r\n fmin: 55\r\n frame_shift_ms: None\r\n griffin_lim_iters: 60\r\n hop_size: 200\r\n mask_decoder: False\r\n mask_encoder: True\r\n max_abs_value: 4.0\r\n max_iters: 2000\r\n max_mel_frames: 900\r\n min_level_db: -100\r\n n_fft: 800\r\n natural_eval: False\r\n normalize_for_wavenet: True\r\n num_mels: 80\r\n outputs_per_step: 2\r\n postnet_channels: 512\r\n postnet_kernel_size: (5,)\r\n postnet_num_layers: 5\r\n power: 1.5\r\n predict_linear: False\r\n preemphasis: 0.97\r\n preemphasize: True\r\n prenet_layers: [256, 256]\r\n ref_level_db: 20\r\n rescale: True\r\n rescaling_max: 0.9\r\n sample_rate: 16000\r\n signal_normalization: True\r\n silence_min_duration_split: 0.4\r\n silence_threshold: 2\r\n smoothing: False\r\n speaker_embedding_size: 256\r\n 
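Stepping back to the `vocoder_dataset.py` record above: the reporter's hypothesis checks out, since `np.random.randint(0, 0)` draws from the empty half-open interval [0, 0) and raises `low >= high`. A minimal sketch of one possible guard (the clamp is illustrative, not the repo's actual fix):

```python
import numpy as np

# An offset of 0 reproduces "ValueError: low >= high", because randint's
# upper bound is exclusive. Clamping the bound to at least 1 makes an
# offset of 0 deterministically yield 0 instead of raising.
max_offsets = [0, 3, 7]
mel_offsets = [np.random.randint(0, max(offset, 1)) for offset in max_offsets]
print(mel_offsets)  # e.g. [0, 2, 5]
```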
split_on_cpu: True\r\n stop_at_any: True\r\n symmetric_mels: True\r\n tacotron_adam_beta1: 0.9\r\n tacotron_adam_beta2: 0.999\r\n tacotron_adam_epsilon: 1e-06\r\n tacotron_batch_size: 36\r\n tacotron_clip_gradients: True\r\n tacotron_data_random_state: 1234\r\n tacotron_decay_learning_rate: True\r\n tacotron_decay_rate: 0.5\r\n tacotron_decay_steps: 50000\r\n tacotron_dropout_rate: 0.5\r\n tacotron_final_learning_rate: 1e-05\r\n tacotron_gpu_start_idx: 0\r\n tacotron_initial_learning_rate: 0.001\r\n tacotron_num_gpus: 1\r\n tacotron_random_seed: 5339\r\n tacotron_reg_weight: 1e-07\r\n tacotron_scale_regularization: False\r\n tacotron_start_decay: 50000\r\n tacotron_swap_with_cpu: False\r\n tacotron_synthesis_batch_size: 128\r\n tacotron_teacher_forcing_decay_alpha: 0.0\r\n tacotron_teacher_forcing_decay_steps: 280000\r\n tacotron_teacher_forcing_final_ratio: 0.0\r\n tacotron_teacher_forcing_init_ratio: 1.0\r\n tacotron_teacher_forcing_mode: constant\r\n tacotron_teacher_forcing_ratio: 1.0\r\n tacotron_teacher_forcing_start_decay: 10000\r\n tacotron_test_batches: None\r\n tacotron_test_size: 0.05\r\n tacotron_zoneout_rate: 0.1\r\n train_with_GTA: False\r\n trim_fft_size: 512\r\n trim_hop_size: 128\r\n trim_top_db: 23\r\n use_lws: False\r\n utterance_min_duration: 1.6\r\n win_size: 800\r\nLoaded metadata for 290550 examples (366.70 hours)\r\ninitialisation done /gpu:0\r\nInitialized Tacotron model. Dimensions (? = dynamic shape): \r\n Train mode: True\r\n Eval mode: False\r\n GTA mode: False\r\n Synthesis mode: False\r\n Input: (?, ?)\r\n device: 0\r\n embedding: (?, ?, 512)\r\n enc conv out: (?, ?, 512)\r\n encoder out (cond): (?, ?, 768)\r\n decoder out: (?, ?, 80)\r\n residual out: (?, ?, 512)\r\n projected residual out: (?, ?, 80)\r\n mel out: (?, ?, 80)\r\n <stop_token> out: (?, ?)\r\n Tacotron Parameters 28.439 Million.\r\ninitialisation done /gpu:0\r\nInitialized Tacotron model. Dimensions (? 
= dynamic shape): \r\n Train mode: False\r\n Eval mode: True\r\n GTA mode: False\r\n Synthesis mode: False\r\n Input: (?, ?)\r\n device: 0\r\n embedding: (?, ?, 512)\r\n enc conv out: (?, ?, 512)\r\n encoder out (cond): (?, ?, 768)\r\n decoder out: (?, ?, 80)\r\n residual out: (?, ?, 512)\r\n projected residual out: (?, ?, 80)\r\n mel out: (?, ?, 80)\r\n <stop_token> out: (?, ?)\r\n Tacotron Parameters 28.439 Million.\r\nTacotron training set to a maximum of 2000000 steps\r\nLoading checkpoint synthesizer/saved_models/logs-my_run\\taco_pretrained\\tacotron_model.ckpt-0\r\n\r\nGenerated 64 train batches of size 36 in 3.626 sec\r\nStep 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]\r\nStep 1 [5.798 sec/step, loss=14.85899, avg_loss=14.85899]\r\n\r\nSaving Model Character Embeddings visualization..\r\nTacotron Character embeddings have been updated on tensorboard!\r\nStep 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]\r\nStep 2 [3.362 sec/step, loss=11.10468, avg_loss=12.98183]\r\n\r\nGenerated 403 test batches of size 36 in 15.574 sec\r\nExiting due to exception: 2 root error(s) found.\r\n (0) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\r\n\t [[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py:1748) ]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n\t [[Tacotron_model/clip_by_global_norm/mul_30/_479]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n (1) Resource exhausted: OOM when allocating tensor with shape[36,512,1,702] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc\r\n\t [[node Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d (defined at e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py:1748) ]]\r\nHint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.\r\n\r\n0 successful operations.\r\n0 derived errors ignored.\r\n\r\nOriginal stack trace for 'Tacotron_model/inference/postnet_convolutions/conv_layer_1_postnet_convolutions/conv1d/conv1d':\r\n File \"synthesizer_train.py\", line 55, in <module>\r\n tacotron_train(args, log_dir, hparams)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 392, in tacotron_train\r\n return train(log_dir, args, hparams)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 148, in train\r\n model, stats = model_train_mode(args, feeder, hparams, global_step)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\train.py\", line 91, in model_train_mode\r\n is_training=True, split_infos=feeder.split_infos)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\tacotron.py\", line 230, in initialize\r\n residual = postnet(decoder_output)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\modules.py\", line 406, in 
__call__\r\n \"conv_layer_{}_\".format(i + 1) + self.scope)\r\n File \"C:\\Users\\matve\\Documents\\Tacotron\\Real-Time-Voice-Cloning\\synthesizer\\models\\modules.py\", line 420, in conv1d\r\n padding=\"same\")\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 324, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\layers\\convolutional.py\", line 218, in conv1d\r\n return layer.apply(inputs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 324, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\base_layer.py\", line 1700, in apply\r\n return self.__call__(inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\layers\\base.py\", line 548, in __call__\r\n outputs = super(Layer, self).__call__(inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\engine\\base_layer.py\", line 854, in __call__\r\n outputs = call_fn(cast_inputs, *args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 234, in wrapper\r\n return converted_call(f, options, args, kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 439, in converted_call\r\n return _call_unconverted(f, args, kwargs, options)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\autograph\\impl\\api.py\", line 330, in _call_unconverted\r\n return f(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\layers\\convolutional.py\", line 387, in call\r\n return super(Conv1D, self).call(inputs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\keras\\layers\\convolutional.py\", line 197, in call\r\n outputs = self._convolution_op(inputs, self.kernel)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 1134, in __call__\r\n return self.conv_op(inp, filter)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 639, in __call__\r\n return self.call(inp, filter)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 238, in __call__\r\n name=self.name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 227, in _conv1d\r\n name=name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 574, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 574, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\nn_ops.py\", line 1681, in conv1d\r\n name=name)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\ops\\gen_nn_ops.py\", line 1071, in conv2d\r\n data_format=data_format, dilations=dilations, name=name)\r\n File 
\"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\op_def_library.py\", line 794, in _apply_op_helper\r\n op_def=op_def)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\util\\deprecation.py\", line 507, in new_func\r\n return func(*args, **kwargs)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 3357, in create_op\r\n attrs, op_def, compute_device)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 3426, in _create_op_internal\r\n op_def=op_def)\r\n File \"e:\\ProgramData\\Miniconda3\\lib\\site-packages\\tensorflow_core\\python\\framework\\ops.py\", line 1748, in __init__\r\n self._traceback = tf_stack.extract_stack()\r\n\r\n2021-02-05 20:02:33.232435: W tensorflow/core/kernels/queue_base.cc:277] _1_datafeeder/eval_queue: Skipping cancelled enqueue attempt with queue not closed\r\n2021-02-05 20:02:33.232577: W tensorflow/core/kernels/queue_base.cc:277] _0_datafeeder/input_queue: Skipping cancelled enqueue attempt with queue not closed\r\n\r\n```\r\nI think it can't use the memory of my GTX 1660 super .Tell the noob what to do\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'synthesizer/hparams.py', 'Loc': {'(None, None, None)': {'mod': [243]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["synthesizer/hparams.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "77c0bd169d8158ed1cdb180cda73c24d3cacd778", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1263", "iss_label": "", "title": "Python 3.10.12 is not supported ", "body": "When I ran python3.10 -m pip install numpy==1.20.3 on linux mint, I got an error while I was trying to install it. But it was totally fine when I used python3.8\r\n![12](https://github.com/CorentinJ/Real-Time-Voice-Cloning/assets/100217654/99071c68-bf38-4ffe-b789-9d292ed539a5)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '77c0bd169d8158ed1cdb180cda73c24d3cacd778', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, None)': {'mod': [4]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/250", "iss_label": "", "title": "[Errno 2] No such file or directory: 'encoder/_sources.txt'", "body": "I have this problem, but I can't understand what does this file contain? 
There is no _sources.txt in this repo.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'encoder_preprocess.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["encoder_preprocess.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "5e400d474043044ba0e3e907a74b4baccb16ee7c", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/425", "iss_label": "", "title": "Tensorflow.contrib file missing what to do", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5e400d474043044ba0e3e907a74b4baccb16ee7c', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 35)': {'mod': [35]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\nand\n2\nhere the guidance is the doc\nthe cause is the version of the dependency library", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/419", "iss_label": "", "title": "Getting an exception when browsing for files", "body": "For some reason, importing mp3 files is not working. Anyone got an idea on why this might be the case?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '9553eaa1748cf94814be322ec7b096d2d6bc7f28', 'files': [{'path': 'README.md', 'Loc': {'(None, None, 40)': {'mod': [40]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/221", "iss_label": "", "title": "A couple inquiries about the colab version", "body": "So I have a setup using a copy of the Colaboratory version, but I want to be able to generate a few sentences at a time without having to generate per sentence.\r\n\r\nI understand that commas and periods don't work, but in the demonstration video it was mentioned that line breaks are a way to get around this for now... however, that's done in the toolbox application. How would it be done in code?\r\n\r\nI've tried \\n but I assume that's only for print-related arguments... but I'm fairly new to Python so excuse my ignorance.\r\n\r\nOn top of this, how could I improve the voice in Colab? In regards to training, it's mentioned that a decent session requires around 500 GB or more... 
since I don't exactly have that in Colab, is there another way to go about doing this?\r\n\r\nI've tried the code with the input being longer than 10 seconds, but apparently if the input is more than 10 seconds or so the voice seems more jittery than it would be if it were capped at 10 seconds. I absolutely applaud this repo but I just really need to understand it a bit better... Thanks in advance.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'toolbox/__init__.py', 'Loc': {\"('Toolbox', 'synthesize', 158)\": {'mod': [170, 171, 172, 173, 174, 175]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["toolbox/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "c5c2261c97afe6ec5db1ef389eba1257f6cce8a2", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/225", "iss_label": "", "title": "Not code-savvy but want to experiment with code", "body": "I have Python Spyder downloaded, but I do not know much about coding, or how to get to the stage where I can add audio and synthesize it. What would you recommend?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c5c2261c97afe6ec5db1ef389eba1257f6cce8a2', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "070a3c187f87136ebe92aa72766f8343772d414e", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/378", "iss_label": "", "title": "I can't install NVIDIA CUDA", "body": "I can't install NVIDIA CUDA even though I followed everything that [this guide](https://poorlydocumented.com/2019/11/installing-corentinjs-real-time-voice-cloning-project-on-windows-10-from-scratch/l) told me to do. I have also tried searching for this problem on the internet, but nothing solves my problem. I have also provided an image of the error [here](https://imgur.com/a/fYkiBYQ).\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '070a3c187f87136ebe92aa72766f8343772d414e', 'files': [{'path': 'demo_cli.py', 'Loc': {'(None, None, None)': {'mod': [34]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["demo_cli.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "CorentinJ", "repo_name": "Real-Time-Voice-Cloning", "base_commit": "9553eaa1748cf94814be322ec7b096d2d6bc7f28", "iss_html_url": "https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/420", "iss_label": "", "title": "New Audio Issue: Assertion Failed", "body": "This was working fine yesterday, and no big changes were made. 
\r\nHowever, today starting up the demo toolbox produced:\r\nAssertion failed!\r\n\r\nProgram: C:\\Users\\paul1\\AppData\\Local\\Programs\\Python\\Python37\\python.exe\r\nFile: src/hostapi/wdmks/pa_win_wdmks.c, Line 1061\r\n\r\nExpression: FALSE\r\n\r\nI have tried reinstalling Visual Studio as well, but to no avail. Any thoughts on this would be deeply appreciated.\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "sounddevice"}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Library"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["sounddevice"]}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "39827a3998afa3ea612e7cc9a475085765d4d509", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/5134", "iss_label": "asking-for-help-with-local-system-issues", "title": "[Bug]: No checkpoints found. Can't run without a checkpoint.", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nDuring the installation (Windows), an error occurs:\r\n```\r\nvenv \"G:\\Dev\\stable-diffusion-webui\\venv\\Scripts\\Python.exe\"\r\nPython 3.10.8 (tags/v3.10.8:aaaf517, Oct 11 2022, 16:50:30) [MSC v.1933 64 bit (AMD64)]\r\nCommit hash: 9e78d2c419732711e984c4478af15ece121d64fd\r\nInstalling requirements for Web UI\r\nLaunching Web UI with arguments:\r\nNo module 'xformers'. Proceeding without it.\r\nNo checkpoints found. When searching for checkpoints, looked at:\r\n - file G:\\Dev\\stable-diffusion-webui\\model.ckpt\r\n - directory G:\\Dev\\stable-diffusion-webui\\models\\Stable-diffusion\r\nCan't run without a checkpoint. Find and place a .ckpt file into any of those locations. 
The program will exit.\r\n```\n\n### Steps to reproduce the problem\n\nLaunch webui-user.bat\n\n### What should have happened?\n\nInstallation complete\n\n### Commit where the problem happens\n\n9e78d2c419732711e984c4478af15ece121d64fd\n\n### What platforms do you use to access UI ?\n\nWindows\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n_No response_\n\n### Additional information, context and logs\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '39827a3998afa3ea612e7cc9a475085765d4d509', 'files': [{'path': 'modules/sd_models.py', 'Loc': {\"(None, 'load_model', 230)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": ["modules/sd_models.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/11458", "iss_label": "bug-report", "title": "[Bug]: ModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues and checked the recent builds/commits\n\n### What happened?\n\nLaunching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\r\n2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n\u256d\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 Traceback (most recent call last) \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256e\r\n\u2502 /content/microsoftexcel/launch.py:38 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2771 38 \u2502 main() \u2502\r\n\u2502 39 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/launch.py:34 in main \u2502\r\n\u2502 \u2502\r\n\u2502 31 \u2502 if args.test_server: \u2502\r\n\u2502 32 \u2502 \u2502 configure_for_tests() \u2502\r\n\u2502 33 \u2502 \u2502\r\n\u2502 \u2771 34 \u2502 start() \u2502\r\n\u2502 35 \u2502\r\n\u2502 36 \u2502\r\n\u2502 37 if __name__ == \"__main__\": \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/launch_utils.py:340 in start \u2502\r\n\u2502 \u2502\r\n\u2502 337 \u2502\r\n\u2502 338 def start(): \u2502\r\n\u2502 339 \u2502 print(f\"Launching {'API server' if '--nowebui' in sys.argv else 'W \u2502\r\n\u2502 \u2771 340 \u2502 import webui \u2502\r\n\u2502 341 \u2502 if '--nowebui' in sys.argv: \u2502\r\n\u2502 342 \u2502 \u2502 webui.api_only() \u2502\r\n\u2502 343 \u2502 else: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/webui.py:42 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 39 
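For the "No checkpoints found" record above, the quoted launcher log already names the two locations it scans. A stdlib sketch to confirm a `.ckpt` file is actually visible in either one (the root path mirrors the log; adjust it to your install):

```python
# Minimal check mirroring the launcher's search: model.ckpt in the webui
# root, or any *.ckpt under models/Stable-diffusion.
from pathlib import Path

root = Path(r"G:\Dev\stable-diffusion-webui")  # path taken from the log above
candidates = [root / "model.ckpt"]
candidates += sorted((root / "models" / "Stable-diffusion").glob("*.ckpt"))
found = [p for p in candidates if p.is_file()]
print(found or "no checkpoints visible in either location")
```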
startup_timer.record(\"import ldm\") \u2502\r\n\u2502 40 \u2502\r\n\u2502 41 from modules import extra_networks \u2502\r\n\u2502 \u2771 42 from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call, \u2502\r\n\u2502 43 \u2502\r\n\u2502 44 # Truncate version number of nightly/local build of PyTorch to not cau \u2502\r\n\u2502 45 if \".dev\" in torch.__version__ or \"+git\" in torch.__version__: \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/call_queue.py:5 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 2 import threading \u2502\r\n\u2502 3 import time \u2502\r\n\u2502 4 \u2502\r\n\u2502 \u2771 5 from modules import shared, progress, errors \u2502\r\n\u2502 6 \u2502\r\n\u2502 7 queue_lock = threading.Lock() \u2502\r\n\u2502 8 \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/modules/shared.py:18 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 15 import modules.devices as devices \u2502\r\n\u2502 16 from modules import localization, script_loading, errors, ui_component \u2502\r\n\u2502 17 from modules.paths_internal import models_path, script_path, data_path \u2502\r\n\u2502 \u2771 18 from ldm.models.diffusion.ddpm import LatentDiffusion \u2502\r\n\u2502 19 from typing import Optional \u2502\r\n\u2502 20 \u2502\r\n\u2502 21 demo = None \u2502\r\n\u2502 \u2502\r\n\u2502 /content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/model \u2502\r\n\u2502 s/diffusion/ddpm.py:20 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 17 import itertools \u2502\r\n\u2502 18 from tqdm import tqdm \u2502\r\n\u2502 19 from torchvision.utils import make_grid \u2502\r\n\u2502 \u2771 20 from pytorch_lightning.utilities.distributed import rank_zero_only \u2502\r\n\u2502 21 from omegaconf import ListConfig \u2502\r\n\u2502 22 \u2502\r\n\u2502 23 from ldm.util import log_txt_as_img, exists, default, ismap, isimage, \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'\n\n### Steps to reproduce the problem\n\n1. on colab\r\n2. try to use the new 1.4.0 release\r\n3. 
error\n\n### What should have happened?\n\nno error\n\n### Version or Commit where the problem happens\n\n1.4.0\n\n### What Python version are you running on ?\n\nNone\n\n### What platforms do you use to access the UI ?\n\nOther/Cloud\n\n### What device are you running WebUI on?\n\n_No response_\n\n### Cross attention optimization\n\nAutomatic\n\n### What browsers do you use to access the UI ?\n\nGoogle Chrome\n\n### Command Line Arguments\n\n```Shell\n!COMMANDLINE_ARGS=\"--share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\" REQS_FILE=\"requirements.txt\" python launch.py\n```\n\n\n### List of extensions\n\nsd-webui-tunnels\r\ncontrolnet\r\nopenpose-editor\r\nposex\r\na1111-sd-webui-tagcomplete\r\nsupermerger\r\nultimate-upscale-for-automatic1111\r\na111 locon extension\r\nimages browser\r\n\n\n### Console logs\n\n```Shell\n**truncated on colab**\r\n\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 1217 100 1217 0 0 3699 0 --:--:-- --:--:-- --:--:-- 3699\r\n100 1722k 100 1722k 0 0 670k 0 0:00:02 0:00:02 --:--:-- 1355k\r\nArchive: /content/microsoftexcel.zip\r\n creating: microsoftexcel/\r\n inflating: microsoftexcel/.eslintignore \r\n inflating: microsoftexcel/.eslintrc.js \r\n inflating: microsoftexcel/.git-blame-ignore-revs \r\n creating: microsoftexcel/.github/\r\n creating: microsoftexcel/.github/ISSUE_TEMPLATE/\r\n inflating: microsoftexcel/.github/ISSUE_TEMPLATE/bug_report.yml \r\n inflating: microsoftexcel/.github/ISSUE_TEMPLATE/config.yml \r\n inflating: microsoftexcel/.github/ISSUE_TEMPLATE/feature_request.yml \r\n inflating: microsoftexcel/.github/pull_request_template.md \r\n creating: microsoftexcel/.github/workflows/\r\n inflating: microsoftexcel/.github/workflows/on_pull_request.yaml \r\n inflating: microsoftexcel/.github/workflows/run_tests.yaml \r\n inflating: microsoftexcel/.gitignore \r\n inflating: microsoftexcel/.pylintrc \r\n inflating: microsoftexcel/CHANGELOG.md \r\n inflating: microsoftexcel/CODEOWNERS \r\n creating: microsoftexcel/configs/\r\n inflating: microsoftexcel/configs/alt-diffusion-inference.yaml \r\n inflating: microsoftexcel/configs/instruct-pix2pix.yaml \r\n inflating: microsoftexcel/configs/v1-inference.yaml \r\n inflating: microsoftexcel/configs/v1-inpainting-inference.yaml \r\n creating: microsoftexcel/embeddings/\r\n extracting: microsoftexcel/embeddings/Place Textual Inversion embeddings here.txt \r\n inflating: microsoftexcel/environment-wsl2.yaml \r\n creating: microsoftexcel/extensions/\r\n extracting: microsoftexcel/extensions/put extensions here.txt \r\n creating: microsoftexcel/extensions-builtin/\r\n creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/\r\n creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/javascript/\r\n inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/javascript/zoom.js \r\n creating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/scripts/\r\n inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/scripts/hotkey_config.py \r\n inflating: microsoftexcel/extensions-builtin/canvas-zoom-and-pan/style.css \r\n creating: microsoftexcel/extensions-builtin/extra-options-section/\r\n creating: microsoftexcel/extensions-builtin/extra-options-section/scripts/\r\n inflating: microsoftexcel/extensions-builtin/extra-options-section/scripts/extra_options_section.py \r\n creating: microsoftexcel/extensions-builtin/LDSR/\r\n inflating: 
microsoftexcel/extensions-builtin/LDSR/ldsr_model_arch.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/preload.py \r\n creating: microsoftexcel/extensions-builtin/LDSR/scripts/\r\n inflating: microsoftexcel/extensions-builtin/LDSR/scripts/ldsr_model.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/sd_hijack_autoencoder.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/sd_hijack_ddpm_v1.py \r\n inflating: microsoftexcel/extensions-builtin/LDSR/vqvae_quantize.py \r\n creating: microsoftexcel/extensions-builtin/Lora/\r\n inflating: microsoftexcel/extensions-builtin/Lora/extra_networks_lora.py \r\n inflating: microsoftexcel/extensions-builtin/Lora/lora.py \r\n inflating: microsoftexcel/extensions-builtin/Lora/preload.py \r\n creating: microsoftexcel/extensions-builtin/Lora/scripts/\r\n inflating: microsoftexcel/extensions-builtin/Lora/scripts/lora_script.py \r\n inflating: microsoftexcel/extensions-builtin/Lora/ui_extra_networks_lora.py \r\n creating: microsoftexcel/extensions-builtin/prompt-bracket-checker/\r\n creating: microsoftexcel/extensions-builtin/prompt-bracket-checker/javascript/\r\n inflating: microsoftexcel/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js \r\n creating: microsoftexcel/extensions-builtin/ScuNET/\r\n inflating: microsoftexcel/extensions-builtin/ScuNET/preload.py \r\n creating: microsoftexcel/extensions-builtin/ScuNET/scripts/\r\n inflating: microsoftexcel/extensions-builtin/ScuNET/scripts/scunet_model.py \r\n inflating: microsoftexcel/extensions-builtin/ScuNET/scunet_model_arch.py \r\n creating: microsoftexcel/extensions-builtin/SwinIR/\r\n inflating: microsoftexcel/extensions-builtin/SwinIR/preload.py \r\n creating: microsoftexcel/extensions-builtin/SwinIR/scripts/\r\n inflating: microsoftexcel/extensions-builtin/SwinIR/scripts/swinir_model.py \r\n inflating: microsoftexcel/extensions-builtin/SwinIR/swinir_model_arch.py \r\n inflating: microsoftexcel/extensions-builtin/SwinIR/swinir_model_arch_v2.py \r\n creating: microsoftexcel/html/\r\n inflating: microsoftexcel/html/card-no-preview.png \r\n inflating: microsoftexcel/html/extra-networks-card.html \r\n inflating: microsoftexcel/html/extra-networks-no-cards.html \r\n inflating: microsoftexcel/html/footer.html \r\n inflating: microsoftexcel/html/image-update.svg \r\n inflating: microsoftexcel/html/licenses.html \r\n creating: microsoftexcel/javascript/\r\n inflating: microsoftexcel/javascript/aspectRatioOverlay.js \r\n inflating: microsoftexcel/javascript/contextMenus.js \r\n inflating: microsoftexcel/javascript/dragdrop.js \r\n inflating: microsoftexcel/javascript/edit-attention.js \r\n inflating: microsoftexcel/javascript/extensions.js \r\n inflating: microsoftexcel/javascript/extraNetworks.js \r\n inflating: microsoftexcel/javascript/generationParams.js \r\n inflating: microsoftexcel/javascript/hints.js \r\n inflating: microsoftexcel/javascript/hires_fix.js \r\n inflating: microsoftexcel/javascript/imageMaskFix.js \r\n inflating: microsoftexcel/javascript/imageviewer.js \r\n inflating: microsoftexcel/javascript/imageviewerGamepad.js \r\n inflating: microsoftexcel/javascript/localization.js \r\n inflating: microsoftexcel/javascript/notification.js \r\n inflating: microsoftexcel/javascript/profilerVisualization.js \r\n inflating: microsoftexcel/javascript/progressbar.js \r\n inflating: microsoftexcel/javascript/textualInversion.js \r\n inflating: microsoftexcel/javascript/token-counters.js \r\n inflating: microsoftexcel/javascript/ui.js \r\n inflating: 
microsoftexcel/javascript/ui_settings_hints.js \r\n inflating: microsoftexcel/launch.py \r\n inflating: microsoftexcel/LICENSE.txt \r\n creating: microsoftexcel/localizations/\r\n extracting: microsoftexcel/localizations/Put localization files here.txt \r\n creating: microsoftexcel/models/\r\n creating: microsoftexcel/models/deepbooru/\r\n extracting: microsoftexcel/models/deepbooru/Put your deepbooru release project folder here.txt \r\n creating: microsoftexcel/models/karlo/\r\n inflating: microsoftexcel/models/karlo/ViT-L-14_stats.th \r\n creating: microsoftexcel/models/Stable-diffusion/\r\n extracting: microsoftexcel/models/Stable-diffusion/Put Stable Diffusion checkpoints here.txt \r\n creating: microsoftexcel/models/VAE/\r\n extracting: microsoftexcel/models/VAE/Put VAE here.txt \r\n creating: microsoftexcel/models/VAE-approx/\r\n inflating: microsoftexcel/models/VAE-approx/model.pt \r\n creating: microsoftexcel/modules/\r\n creating: microsoftexcel/modules/api/\r\n inflating: microsoftexcel/modules/api/api.py \r\n inflating: microsoftexcel/modules/api/models.py \r\n inflating: microsoftexcel/modules/call_queue.py \r\n inflating: microsoftexcel/modules/cmd_args.py \r\n creating: microsoftexcel/modules/codeformer/\r\n inflating: microsoftexcel/modules/codeformer/codeformer_arch.py \r\n inflating: microsoftexcel/modules/codeformer/vqgan_arch.py \r\n inflating: microsoftexcel/modules/codeformer_model.py \r\n inflating: microsoftexcel/modules/config_states.py \r\n inflating: microsoftexcel/modules/deepbooru.py \r\n inflating: microsoftexcel/modules/deepbooru_model.py \r\n inflating: microsoftexcel/modules/devices.py \r\n inflating: microsoftexcel/modules/errors.py \r\n inflating: microsoftexcel/modules/esrgan_model.py \r\n inflating: microsoftexcel/modules/esrgan_model_arch.py \r\n inflating: microsoftexcel/modules/extensions.py \r\n inflating: microsoftexcel/modules/extras.py \r\n inflating: microsoftexcel/modules/extra_networks.py \r\n inflating: microsoftexcel/modules/extra_networks_hypernet.py \r\n inflating: microsoftexcel/modules/face_restoration.py \r\n inflating: microsoftexcel/modules/generation_parameters_copypaste.py \r\n inflating: microsoftexcel/modules/gfpgan_model.py \r\n inflating: microsoftexcel/modules/gitpython_hack.py \r\n inflating: microsoftexcel/modules/hashes.py \r\n creating: microsoftexcel/modules/hypernetworks/\r\n inflating: microsoftexcel/modules/hypernetworks/hypernetwork.py \r\n inflating: microsoftexcel/modules/hypernetworks/ui.py \r\n inflating: microsoftexcel/modules/images.py \r\n inflating: microsoftexcel/modules/img2img.py \r\n inflating: microsoftexcel/modules/import_hook.py \r\n inflating: microsoftexcel/modules/interrogate.py \r\n inflating: microsoftexcel/modules/launch_utils.py \r\n inflating: microsoftexcel/modules/localization.py \r\n inflating: microsoftexcel/modules/lowvram.py \r\n inflating: microsoftexcel/modules/mac_specific.py \r\n inflating: microsoftexcel/modules/masking.py \r\n inflating: microsoftexcel/modules/memmon.py \r\n inflating: microsoftexcel/modules/modelloader.py \r\n creating: microsoftexcel/modules/models/\r\n creating: microsoftexcel/modules/models/diffusion/\r\n inflating: microsoftexcel/modules/models/diffusion/ddpm_edit.py \r\n creating: microsoftexcel/modules/models/diffusion/uni_pc/\r\n inflating: microsoftexcel/modules/models/diffusion/uni_pc/sampler.py \r\n inflating: microsoftexcel/modules/models/diffusion/uni_pc/uni_pc.py \r\n inflating: microsoftexcel/modules/models/diffusion/uni_pc/__init__.py \r\n inflating: 
microsoftexcel/modules/ngrok.py \r\n inflating: microsoftexcel/modules/paths.py \r\n inflating: microsoftexcel/modules/paths_internal.py \r\n inflating: microsoftexcel/modules/postprocessing.py \r\n inflating: microsoftexcel/modules/processing.py \r\n inflating: microsoftexcel/modules/progress.py \r\n inflating: microsoftexcel/modules/prompt_parser.py \r\n inflating: microsoftexcel/modules/realesrgan_model.py \r\n inflating: microsoftexcel/modules/restart.py \r\n inflating: microsoftexcel/modules/Roboto-Regular.ttf \r\n inflating: microsoftexcel/modules/safe.py \r\n inflating: microsoftexcel/modules/scripts.py \r\n inflating: microsoftexcel/modules/scripts_auto_postprocessing.py \r\n inflating: microsoftexcel/modules/scripts_postprocessing.py \r\n inflating: microsoftexcel/modules/script_callbacks.py \r\n inflating: microsoftexcel/modules/script_loading.py \r\n inflating: microsoftexcel/modules/sd_disable_initialization.py \r\n inflating: microsoftexcel/modules/sd_hijack.py \r\n inflating: microsoftexcel/modules/sd_hijack_checkpoint.py \r\n inflating: microsoftexcel/modules/sd_hijack_clip.py \r\n inflating: microsoftexcel/modules/sd_hijack_clip_old.py \r\n inflating: microsoftexcel/modules/sd_hijack_inpainting.py \r\n inflating: microsoftexcel/modules/sd_hijack_ip2p.py \r\n inflating: microsoftexcel/modules/sd_hijack_open_clip.py \r\n inflating: microsoftexcel/modules/sd_hijack_optimizations.py \r\n inflating: microsoftexcel/modules/sd_hijack_unet.py \r\n inflating: microsoftexcel/modules/sd_hijack_utils.py \r\n inflating: microsoftexcel/modules/sd_hijack_xlmr.py \r\n inflating: microsoftexcel/modules/sd_models.py \r\n inflating: microsoftexcel/modules/sd_models_config.py \r\n inflating: microsoftexcel/modules/sd_samplers.py \r\n inflating: microsoftexcel/modules/sd_samplers_common.py \r\n inflating: microsoftexcel/modules/sd_samplers_compvis.py \r\n inflating: microsoftexcel/modules/sd_samplers_kdiffusion.py \r\n inflating: microsoftexcel/modules/sd_unet.py \r\n inflating: microsoftexcel/modules/sd_vae.py \r\n inflating: microsoftexcel/modules/sd_vae_approx.py \r\n inflating: microsoftexcel/modules/sd_vae_taesd.py \r\n inflating: microsoftexcel/modules/shared.py \r\n inflating: microsoftexcel/modules/shared_items.py \r\n inflating: microsoftexcel/modules/styles.py \r\n inflating: microsoftexcel/modules/sub_quadratic_attention.py \r\n inflating: microsoftexcel/modules/sysinfo.py \r\n creating: microsoftexcel/modules/textual_inversion/\r\n inflating: microsoftexcel/modules/textual_inversion/autocrop.py \r\n inflating: microsoftexcel/modules/textual_inversion/dataset.py \r\n inflating: microsoftexcel/modules/textual_inversion/image_embedding.py \r\n inflating: microsoftexcel/modules/textual_inversion/learn_schedule.py \r\n inflating: microsoftexcel/modules/textual_inversion/logging.py \r\n inflating: microsoftexcel/modules/textual_inversion/preprocess.py \r\n inflating: microsoftexcel/modules/textual_inversion/test_embedding.png \r\n inflating: microsoftexcel/modules/textual_inversion/textual_inversion.py \r\n inflating: microsoftexcel/modules/textual_inversion/ui.py \r\n inflating: microsoftexcel/modules/timer.py \r\n inflating: microsoftexcel/modules/txt2img.py \r\n inflating: microsoftexcel/modules/ui.py \r\n inflating: microsoftexcel/modules/ui_common.py \r\n inflating: microsoftexcel/modules/ui_components.py \r\n inflating: microsoftexcel/modules/ui_extensions.py \r\n inflating: microsoftexcel/modules/ui_extra_networks.py \r\n inflating: 
microsoftexcel/modules/ui_extra_networks_checkpoints.py \r\n[log trimmed: remaining unzip listing of the microsoftexcel archive; git clones of the microsoftexcel-tunnels, microsoftexcel-controlnet, openpose-editor, posex, a1111-microsoftexcel-tagcomplete, microsoftexcel-supermerger, ultimate-upscale-for-automatic1111, and a1111-microsoftexcel-locon extensions; unzip of microsoftexcel-images-browser.zip, embeddings.zip, and upscalers.zip; curl download progress bars]\r\n/content/microsoftexcel\r\nfatal: not a git repository (or any of the parent directories): .git\r\nfatal: not a git repository (or any of the parent directories): .git\r\nPython 3.10.12 (main, Jun 7 2023, 12:45:35) [GCC 9.4.0]\r\nVersion: ## 1.4.0\r\nCommit hash: <none>\r\nInstalling gfpgan\r\nInstalling clip\r\nInstalling open_clip\r\nInstalling xformers\r\nCloning Stable Diffusion into /content/microsoftexcel/repositories/stable-diffusion-stability-ai...\r\nCloning K-diffusion into /content/microsoftexcel/repositories/k-diffusion...\r\nCloning CodeFormer into /content/microsoftexcel/repositories/CodeFormer...\r\nCloning BLIP into /content/microsoftexcel/repositories/BLIP...\r\nInstalling requirements for CodeFormer\r\nInstalling requirements\r\nInstalling sd-webui-controlnet requirement: mediapipe\r\nInstalling sd-webui-controlnet requirement: svglib\r\nInstalling sd-webui-controlnet requirement: fvcore\r\n\r\nInstalling pycloudflared\r\n\r\nInstalling diffusers\r\n\r\nLaunching Web UI with arguments: --share --disable-safe-unpickle --no-half-vae --xformers --enable-insecure-extension --gradio-queue\r\n2023-06-27 13:53:22.297173: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.\r\nTo enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.\r\n2023-06-27 13:53:23.287285: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\r\n
Traceback (most recent call last):\r\n  File \"/content/microsoftexcel/launch.py\", line 38, in <module>\r\n    main()\r\n  File \"/content/microsoftexcel/launch.py\", line 34, in main\r\n    start()\r\n  File \"/content/microsoftexcel/modules/launch_utils.py\", line 340, in start\r\n    import webui\r\n  File \"/content/microsoftexcel/webui.py\", line 42, in <module>\r\n    from modules.call_queue import wrap_gradio_gpu_call, wrap_queued_call,\r\n  File \"/content/microsoftexcel/modules/call_queue.py\", line 5, in <module>\r\n    from modules import shared, progress, errors\r\n  File \"/content/microsoftexcel/modules/shared.py\", line 18, in <module>\r\n    from ldm.models.diffusion.ddpm import LatentDiffusion\r\n  File \"/content/microsoftexcel/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py\", line 20, in <module>\r\n    from pytorch_lightning.utilities.distributed import rank_zero_only\r\nModuleNotFoundError: No module named 'pytorch_lightning.utilities.distributed'\n```\n\n\n### Additional information\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fab73f2e7d388ca99cdb3d5de7f36c0b9a1a3b1c', 'files': [{'path': 'extensions-builtin/LDSR/sd_hijack_ddpm_v1.py', 'Loc': {'(None, None, None)': {'mod': [17]}}, 'status': 'modified'}, {'path': 'modules/models/diffusion/ddpm_edit.py', 'Loc': {'(None, None, None)': {'mod': [22]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/models/diffusion/ddpm_edit.py", "extensions-builtin/LDSR/sd_hijack_ddpm_v1.py"], "doc": [], "test": [], "config": [], "asset": []}} +
{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "ef4c94e1cfe66299227aa95a28c2380d21cb1600", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3902", "iss_label": "", "title": "[Feature Request]: ", "body": "Finer control of CFG Scale? now it goes by 0.5 steps. I'm trying to replicate work i did on other app which have CFG scale control by 0.1. i cannot get the same result, of course. \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ui-config.json"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Config"}, "loctype": {"code": ["ui-config.json"], "doc": [], "test": [], "config": [], "asset": []}} +
{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "bf30673f5132c8f28357b31224c54331e788d3e7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/3301", "iss_label": "bug-report", "title": "Expected all tensors to be on the same device", "body": "RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! 
(when checking argument for argument index in method wrapper__index_select)\r\n\r\nhow to pick the CUDA:0 \uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'bf30673f5132c8f28357b31224c54331e788d3e7', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 17)': {'mod': [17]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "39919c40dd18f5a14ae21403efea1b0f819756c7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2190", "iss_label": "bug-report", "title": "How to use .ckpt model on repo", "body": "Hello everyone!\r\n\r\nI was able to train a custom model using Dreambooth and I now have a custom ckpt trained on myself. Where do I put this file to be able to use it in this repo?\r\n\r\nI dropped in into models but not sure what to do next?\r\n\r\nAppreciate any help", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '39919c40dd18f5a14ae21403efea1b0f819756c7', 'files': [{'path': 'models/Stable-diffusion', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["models/Stable-diffusion"]}} +{"organization": "AUTOMATIC1111", "repo_name": "stable-diffusion-webui", "base_commit": "556c36b9607e3f4eacdddc85f8e7a78b29476ea7", "iss_html_url": "https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1614", "iss_label": "enhancement", "title": "Feature request: GPU temperature control ", "body": "**Is your feature request related to a problem? 
Please describe.**\r\nI don't like 85 degrees (Celsius) on my GPU, especially if it lasts more than 30 minutes or even 1 hour\r\n\r\n**Describe the solution you'd like**\r\nIf temp on a GPU is more than {maxTemp} and it lasts {accumulateTempTime} it will pause processing for {cooldownTime} or until it cools to {minTemp}, so my GPU won't end up with exploding\r\n\r\n**Describe alternatives you've considered**\r\nNot pausing, but lowering the activity to a few tens of seconds per step.\r\n\r\n**Additional context**\r\nNot lowering it in hard core, but smartly lowering activity (using sth similar to PID), so the temp will stay at {desiredTemp}\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "w-e-w", "pro": "stable-diffusion-webui-GPU-temperature-protection"}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["stable-diffusion-webui-GPU-temperature-protection"]}} +{"organization": "python", "repo_name": "cpython", "base_commit": "c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba", "iss_html_url": "https://github.com/python/cpython/issues/39472", "iss_label": "docs", "title": "Wrong reference for specific minidom methods", "body": "BPO | [832251](https://bugs.python.org/issue832251)\n--- | :---\nNosy | @freddrake\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/freddrake'\nclosed_at = <Date 2004-04-01.04:19:08.000>\ncreated_at = <Date 2003-10-29.09:39:39.000>\nlabels = ['docs']\ntitle = 'Wrong reference for specific minidom methods'\nupdated_at = <Date 2004-04-01.04:19:08.000>\nuser = 'https://bugs.python.org/nerby'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2004-04-01.04:19:08.000>\nactor = 'fdrake'\nassignee = 'fdrake'\nclosed = True\nclosed_date = None\ncloser = None\ncomponents = ['Documentation']\ncreation = <Date 2003-10-29.09:39:39.000>\ncreator = 'nerby'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 832251\nkeywords = []\nmessage_count = 3.0\nmessages = ['18799', '18800', '18801']\nnosy_count = 2.0\nnosy_names = ['fdrake', 'nerby']\npr_nums = []\npriority = 'high'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue832251'\nversions = ['Python 2.3']\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c40b7afee28fb928fdc3f07a9a7e9d4ef17347ba', 'files': [{'path': 'Doc/lib/xmldomminidom.tex', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\ndoc\u95ee\u9898", "iss_reason": "2\ndoc\u9519\u8bef\uff0c\u4e0d\u662fbug", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Doc/lib/xmldomminidom.tex"]}} +{"organization": "python", "repo_name": "cpython", "base_commit": "5a65c2d43607a5033d7171445848cde21f07d81d", "iss_html_url": "https://github.com/python/cpython/issues/32681", "iss_label": "interpreter-core", "title": ".pyc writing/reading race condition (PR#189)", "body": "BPO | [210610](https://bugs.python.org/issue210610)\n--- | :---\nNosy | @gvanrossum\n\n<sup>*Note: these values 
reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/gvanrossum'\nclosed_at = <Date 2000-09-20.20:33:21.000>\ncreated_at = <Date 2000-07-31.21:05:42.000>\nlabels = ['interpreter-core']\ntitle = '.pyc writing/reading race condition (PR#189)'\nupdated_at = <Date 2000-09-20.20:33:21.000>\nuser = 'https://bugs.python.org/anonymous'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2000-09-20.20:33:21.000>\nactor = 'gvanrossum'\nassignee = 'gvanrossum'\nclosed = True\nclosed_date = None\ncloser = None\ncomponents = ['Interpreter Core']\ncreation = <Date 2000-07-31.21:05:42.000>\ncreator = 'anonymous'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 210610\nkeywords = []\nmessage_count = 4.0\nmessages = ['66', '67', '68', '69']\nnosy_count = 2.0\nnosy_names = ['gvanrossum', 'jhylton']\npr_nums = []\npriority = 'low'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue210610'\nversions = []\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5a65c2d43607a5033d7171445848cde21f07d81d', 'files': [{'path': 'Doc/library/os.rst', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": ["fcntl.h"], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["fcntl.h"], "doc": ["Doc/library/os.rst"], "test": [], "config": [], "asset": []}} +{"organization": "python", "repo_name": "cpython", "base_commit": "adf03c3544084359d89e7a0bc2a5aa0561f1a0f2", "iss_html_url": "https://github.com/python/cpython/issues/68620", "iss_label": "stdlib\nrelease-blocker", "title": "Upgrade windows builds to use OpenSSL 1.0.2c", "body": "BPO | [24432](https://bugs.python.org/issue24432)\n--- | :---\nNosy | @pfmoore, @pitrou, @larryhastings, @giampaolo, @tiran, @tjguk, @benjaminp, @ned-deily, @alex, @bitdancer, @zware, @zooba, @dstufft\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/zooba'\nclosed_at = <Date 2015-07-03.22:28:01.834>\ncreated_at = <Date 2015-06-11.15:05:25.361>\nlabels = ['library', 'release-blocker']\ntitle = 'Upgrade windows builds to use OpenSSL 1.0.2c'\nupdated_at = <Date 2015-07-04.06:47:41.096>\nuser = 'https://github.com/alex'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2015-07-04.06:47:41.096>\nactor = 'python-dev'\nassignee = 'steve.dower'\nclosed = True\nclosed_date = <Date 2015-07-03.22:28:01.834>\ncloser = 'steve.dower'\ncomponents = ['Library (Lib)']\ncreation = <Date 2015-06-11.15:05:25.361>\ncreator = 'alex'\ndependencies = []\nfiles = []\nhgrepos = []\nissue_num = 24432\nkeywords = ['security_issue']\nmessage_count = 29.0\nmessages = ['245173', '245178', '245283', '246116', '246133', '246136', '246143', '246172', '246182', '246185', '246189', '246190', '246195', '246205', '246209', '246210', '246211', '246212', '246213', '246214', '246215', '246216', '246221', '246222', '246224', '246225', '246227', '246228', '246240']\nnosy_count = 15.0\nnosy_names = ['paul.moore', 'janssen', 'pitrou', 'larry', 'giampaolo.rodola', 'christian.heimes', 'tim.golden', 'benjamin.peterson', 
'ned.deily', 'alex', 'r.david.murray', 'python-dev', 'zach.ware', 'steve.dower', 'dstufft']\npr_nums = []\npriority = 'release blocker'\nresolution = 'fixed'\nstage = 'resolved'\nstatus = 'closed'\nsuperseder = None\ntype = None\nurl = 'https://bugs.python.org/issue24432'\nversions = ['Python 2.7', 'Python 3.4', 'Python 3.5', 'Python 3.6']\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'adf03c3544084359d89e7a0bc2a5aa0561f1a0f2', 'files': [{'path': 'PCbuild/get_externals.bat', 'Loc': {'(None, None, 57)': {'mod': [57]}}, 'status': 'modified'}, {'path': 'PCbuild/python.props', 'Loc': {'(None, None, 37)': {'mod': [37]}}, 'status': 'modified'}, {'path': 'PCbuild/readme.txt', 'Loc': {'(None, None, 200)': {'mod': [200]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["PCbuild/readme.txt"], "test": [], "config": ["PCbuild/get_externals.bat", "PCbuild/python.props"], "asset": []}} +{"organization": "python", "repo_name": "cpython", "base_commit": "5198a5c7aa77367765ae03542b561845094ca30d", "iss_html_url": "https://github.com/python/cpython/issues/48435", "iss_label": "type-bug\nstdlib\ntopic-regex", "title": "re module treats raw strings as normal strings", "body": "BPO | [4185](https://bugs.python.org/issue4185)\n--- | :---\nNosy | @gvanrossum, @loewis, @akuchling, @birkenfeld, @ezio-melotti\nFiles | <li>[raw-strings-with-re.txt](https://bugs.python.org/file11868/raw-strings-with-re.txt \"Uploaded as text/plain at 2008-10-23.03:55:27 by @ezio-melotti\"): Interactive Python session with more examples</li>\n\n<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>\n\n<details><summary>Show more details</summary><p>\n\nGitHub fields:\n```python\nassignee = 'https://github.com/akuchling'\nclosed_at = <Date 2009-01-01.12:00:35.699>\ncreated_at = <Date 2008-10-23.03:55:28.615>\nlabels = ['expert-regex', 'type-bug', 'library']\ntitle = 're module treats raw strings as normal strings'\nupdated_at = <Date 2009-01-01.12:00:35.697>\nuser = 'https://github.com/ezio-melotti'\n```\n\nbugs.python.org fields:\n```python\nactivity = <Date 2009-01-01.12:00:35.697>\nactor = 'georg.brandl'\nassignee = 'akuchling'\nclosed = True\nclosed_date = <Date 2009-01-01.12:00:35.699>\ncloser = 'georg.brandl'\ncomponents = ['Library (Lib)', 'Regular Expressions']\ncreation = <Date 2008-10-23.03:55:28.615>\ncreator = 'ezio.melotti'\ndependencies = []\nfiles = ['11868']\nhgrepos = []\nissue_num = 4185\nkeywords = []\nmessage_count = 8.0\nmessages = ['75133', '75134', '75135', '75760', '77502', '77562', '77575', '78699']\nnosy_count = 5.0\nnosy_names = ['gvanrossum', 'loewis', 'akuchling', 'georg.brandl', 'ezio.melotti']\npr_nums = []\npriority = 'normal'\nresolution = 'fixed'\nstage = None\nstatus = 'closed'\nsuperseder = None\ntype = 'behavior'\nurl = 'https://bugs.python.org/issue4185'\nversions = ['Python 2.6', 'Python 2.5', 'Python 2.4']\n```\n\n</p></details>\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5198a5c7aa77367765ae03542b561845094ca30d', 'files': [{'path': 'Doc/library/re.rst', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\nor\n3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", 
"info_type": "Doc"}, "loctype": {"code": [], "doc": ["Doc/library/re.rst"], "test": [], "config": [], "asset": []}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "ab6bcb4968bef335175c0b01972657961b2b1250", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/565", "iss_label": "", "title": "[BUG/Help] <title>\u4f7f\u7528ptuning\u5fae\u8c03\u65f6\u62a5\u9519RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nTraceback (most recent call last):\r\n File \"main.py\", line 429, in <module>\r\n main()\r\n File \"main.py\", line 112, in main\r\n tokenizer = AutoTokenizer.from_pretrained(model_args.model_name_or_path, trust_remote_code=True)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/models/auto/tokenization_auto.py\", line 679, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1804, in from_pretrained\r\n return cls._from_pretrained(\r\n File \"/root/miniconda3/lib/python3.8/site-packages/transformers/tokenization_utils_base.py\", line 1958, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 205, in __init__\r\n self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 61, in __init__\r\n self.text_tokenizer = TextTokenizer(vocab_file)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 22, in __init__\r\n self.sp.Load(model_path)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n File \"/root/miniconda3/lib/python3.8/site-packages/sentencepiece/__init__.py\", line 310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\nRuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] \n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n\u4f7f\u7528ptuning\u5fae\u8c03\u65f6\u62a5\u9519\uff0c\u5df2\u7ecf\u662f\u6700\u65b0\u7248\u7684\u6a21\u578b\u6587\u4ef6\u4e86\n\n### Environment\n\n```markdown\nPyTorch 1.11.0\r\nPython 3.8(ubuntu20.04)\r\nCuda 11.3\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ice_text.model"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "801b1bb57690f0a99943f0a80c839b9ee120f3a7", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/388", "iss_label": "", "title": "\u4e3a\u4ec0\u4e48\u4e0d\u80fd\u7528\u5171\u4eabGPU\u5185\u5b58\u5462[Feature] <title>", "body": "### Is your feature request related to a problem? 
Please describe.\n\n\u4e3a\u4ec0\u4e48\u4e0d\u80fd\u7528\u5171\u4eabGPU\u5185\u5b58\u5462\r\n\u4e13\u75286G\u90fd\u6ee1\u4e86\u4f46\u662f\u5171\u4eabGPU\u5185\u5b58\u4e00\u70b9\u90fd\u6ca1\u52a8\r\n\n\n### Solutions\n\nemm\n\n### Additional context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "Jittor", "pro": "JittorLLMs"}], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["JittorLLMs"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "afe08a19ccadc8b238c218b245bb4c1c62598588", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/770", "iss_label": "", "title": "RuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] ", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u8fd0\u884cpython cli_demo.py\u62a5\u9519\r\n\r\nroot@4uot40mdrplpv-0:/yx/ChatGLM-6B# python mycli_demo.py\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"/yx/ChatGLM-6B/mycli_demo.py\", line 6, in <module>\r\n tokenizer = AutoTokenizer.from_pretrained(\"/yx/ChatGLM-6B/THUDM/chatglm-6b\", trust_remote_code=True)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/models/auto/tokenization_auto.py\", line 679, in from_pretrained\r\n return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 1804, in from_pretrained\r\n return cls._from_pretrained(\r\n ^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py\", line 1958, in _from_pretrained\r\n tokenizer = cls(*init_inputs, **init_kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 205, in __init__\r\n self.sp_tokenizer = SPTokenizer(vocab_file, num_image_tokens=num_image_tokens)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 61, in __init__\r\n self.text_tokenizer = TextTokenizer(vocab_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/root/.cache/huggingface/modules/transformers_modules/chatglm-6b/tokenization_chatglm.py\", line 22, in __init__\r\n self.sp.Load(model_path)\r\n File \"/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 905, in Load\r\n return self.LoadFromFile(model_file)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/local/lib/python3.11/site-packages/sentencepiece/__init__.py\", line 310, in LoadFromFile\r\n return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\nRuntimeError: Internal: src/sentencepiece_processor.cc(1101) [model_proto->ParseFromArray(serialized.data(), serialized.size())] 
\r\n\r\n\u6211\u662f\u5728docker\u4e2d\u8fd0\u884c\u7684, \u9ebb\u70e6\u770b\u770b\u662f\u600e\u4e48\u56de\u4e8b, \u8c22\u8c22\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nhelp\n\n### Environment\n\n```markdown\n- OS:Red Hat 4.8.5-44\r\n- Python:3.11\r\n- Transformers:4.27.1\r\n- PyTorch:2.0\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :False\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ice_text.model"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "d11eb5213e3c17225b47bb806a120dd45a80b126", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/63", "iss_label": "", "title": "How to fix error like this: torch.cuda.OutOfMemoryError: CUDA out of memory ?", "body": "OS: ubuntu 20.04\r\nThe error message said we need to change value of max_split_size_mb, but I search source code and cannot find any file contains max_split_size_mb, would you please provide some guidance about how to fix?\r\n```\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8/8 [00:16<00:00, 2.09s/it]\r\nTraceback (most recent call last):\r\n File \"/home/zhangclb/sandbox/ai_llm/ChatGLM-6B/cli_demo.py\", line 6, in <module>\r\n model = AutoModel.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True).half().cuda()\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 749, in cuda\r\n return self._apply(lambda t: t.cuda(device))\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 641, in _apply\r\n module._apply(fn)\r\n [Previous line repeated 2 more times]\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 664, in _apply\r\n param_applied = fn(param)\r\n File \"/home/zhangclb/.local/env39/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 749, in <lambda>\r\n return 
self._apply(lambda t: t.cuda(device))\r\ntorch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 128.00 MiB (GPU 0; 1.83 GiB total capacity; 1.27 GiB already allocated; 57.19 MiB free; 1.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd11eb5213e3c17225b47bb806a120dd45a80b126', 'files': [{'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["cli_demo.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a9fc0184446fba7f4f27addf519fea0b371df83a", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/417", "iss_label": "", "title": "[Help] <title> Oracle Linux 7.9 \u8fd0\u884cint4\u6a21\u578b\u51fa\u9519\uff0cAttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n/x/home/chatglm_env/lib/python3.7/site-packages/requests/__init__.py:104: RequestsDependencyWarning: urllib3 (1.26.14) or chardet (5.1.0)/charset_normalizer (2.0.12) doesn't match a supported version!\r\n RequestsDependencyWarning)\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\n/x/home/chatglm_env/lib/python3.7/site-packages/torchvision/io/image.py:13: UserWarning: Failed to load image Python extension: libc10_cuda.so: cannot open shared object file: No such file or directory\r\n warn(f\"Failed to load image Python extension: {e}\")\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nNo compiled kernel found.\r\nCompiling kernels : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c\r\nCompiling gcc -O3 -fPIC -pthread -fopenmp -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels_parallel.so\r\nsh: gcc: command not found\r\nCompile failed, using default cpu kernel code.\r\nCompiling gcc -O3 -fPIC -std=c99 /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.c -shared -o /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so\r\nKernels compiled : /x/home/.cache/huggingface/modules/transformers_modules/local/quantization_kernels.so\r\nCannot load cpu kernel, don't use quantized model on cpu.\r\nUsing quantization cache\r\nApplying quantization to glm layers\r\nTraceback (most recent call last):\r\n File \"chatglm-int4-demo.py\", line 8, in <module>\r\n response, history = model.chat(tokenizer, '\u4f60\u597d', history=[])\r\n File 
\"/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 1137, in chat\r\n outputs = self.generate(**input_ids, **gen_kwargs)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/autograd/grad_mode.py\", line 27, in decorate_context\r\n return func(*args, **kwargs)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py\", line 1447, in generate\r\n **model_kwargs,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/transformers/generation/utils.py\", line 2447, in sample\r\n output_hidden_states=output_hidden_states,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 1051, in forward\r\n return_dict=return_dict,\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 887, in forward\r\n output_attentions=output_attentions\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 588, in forward\r\n output_attentions=output_attentions\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/modeling_chatglm.py\", line 406, in forward\r\n mixed_raw_layer = self.query_key_value(hidden_states)\r\n File \"/x/home/chatglm_env/lib/python3.7/site-packages/torch/nn/modules/module.py\", line 1194, in _call_impl\r\n return forward_call(*input, **kwargs)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 334, in forward\r\n output = W8A16LinearCPU.apply(input, self.weight, self.weight_scale, self.weight_bit_width, self.quantization_cache)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 74, in forward\r\n weight = extract_weight_to_float(quant_w, scale_w, weight_bit_width, quantization_cache=quantization_cache)\r\n File \"/x/home/.cache/huggingface/modules/transformers_modules/local/quantization.py\", line 256, in extract_weight_to_float\r\n func = cpu_kernels.int4WeightExtractionFloat\r\nAttributeError: 'NoneType' object has no attribute 'int4WeightExtractionFloat'\r\n\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nfrom transformers import AutoTokenizer, AutoModel\r\n\r\nmodel_path = '/x/home/chatglm-6b-int4'\r\n\r\ntokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(model_path, trust_remote_code=True).float()\r\n\r\nresponse, history = model.chat(tokenizer, '\u4f60\u597d', history=[])\r\n\n\n### Environment\n\n```markdown\n- OS: Oracle 7.9\r\n- Python: 3.7.13\r\n- Transformers: 2.6.1\r\n- PyTorch: 1.13.1\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : no cuda, use cpu\n```\n\n\n### 
Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"pro": "gcc"}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "2", "info_type": "\u5e93"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["gcc"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "0c6d1750ef6042338534c3c97002175fa1ae0499", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/10", "iss_label": "question", "title": "\u53ef\u4ee5\u4f7f\u7528\u81ea\u5df1\u7684\u6570\u636e\u5fae\u8c03\u5417", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0c6d1750ef6042338534c3c97002175fa1ae0499', 'files': [{'path': 'ptuning/', 'Loc': {}}, {'path': 'ptuning/', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ptuning/"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "c55ecd89a079b86620cc722f2e21a14e3718d0f3", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/39", "iss_label": "", "title": "6GB\u663e\u5361\u63d0\u793a\u663e\u5b58\u4e0d\u8db3", "body": "\u663e\u5361\uff1a3060laptop 6GB\r\n\u62a5\u9519\uff1aRuntimeError: CUDA out of memory. Tried to allocate 96.00 MiB (GPU 0; 6.00 GiB total capacity; 5.27 GiB already allocated; 0 bytes free; 5.28 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c55ecd89a079b86620cc722f2e21a14e3718d0f3', 'files': [{'path': 'web_demo.py', 'Loc': {'(None, None, None)': {'mod': [5]}}, 'status': 'modified'}, {'path': 'cli_demo.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["web_demo.py", "cli_demo.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "1d87dac585c8fafd708db16860b628928ec5a821", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/532", "iss_label": "", "title": "[BUG/Help] \u8fd9\u4e24\u5929\u66f4\u65b0\u7248\u672c\u540e\uff0cchat\u7684\u5fae\u8c03\u597d\u50cf\u7528\u4e0d\u4e86\u4e86", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u524d\u51e0\u5929\u4f7f\u7528chat\u5fae\u8c03\u8fd8\u662f\u53ef\u4ee5\u7528\u7684\uff0c\u90a3\u65f6\u5019output\u6587\u4ef6\u662f\u5b8c\u6574\u7684\u5305\uff0c\u800c\u4e0d\u662f\u589e\u91cf\u5fae\u8c03\u5305\u3002\r\n\u8fd9\u4e24\u5929\u66f4\u65b0\u540e\uff0c\u4f7f\u7528\u7684\u8fd8\u662f\u9879\u76ee\u81ea\u5e26\u7684train_chat.sh\uff0c\u6a21\u578b\u7528\u7684\u662fint4\u3002\r\noutput\u6587\u4ef6\u786e\u5b9e\u5c0f\u4e86\uff0c\u4f46\u662f\u5374\u65e0\u6cd5\u8fd0\u884c\u4e86\uff0c\u5177\u4f53\u5f62\u5f0f\u4e3a\u8fd0\u884c\u4ee5\u4e0b\u4ee3\u7801\r\n```python\r\nfrom transformers import AutoTokenizer, AutoModel\r\ntokenizer = 
AutoTokenizer.from_pretrained(\"/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50\", trust_remote_code=True)\r\nmodel = AutoModel.from_pretrained(\"/content/ChatGLM-6B/ptuning/output/chattm/checkpoint-50\", trust_remote_code=True).half().cuda()\r\nmodel = model.eval()\r\nresponse, history = model.chat(tokenizer, \"\u4f60\u597d\", history=[])\r\nprint(response)\r\n```\r\n\u62a5\u4ee5\u4e0b\u5185\u5bb9\u540e\u65e0\u53cd\u5e94\uff0c\u81f3\u5c115\u5206\u949f\u3002\u671f\u95f4\u663e\u5b58\u4e00\u76f4\u5728\u4e0a\u5347\r\nYou should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.\r\nThe dtype of attention mask (torch.int64) is not bool\r\n\u6700\u7ec8\u62a5\u9519\r\n2023-04-11 13:51:41.577016: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n-\n\n### Environment\n\n```markdown\ncolab pro \u9ed8\u8ba4\u73af\u5883\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1d87dac585c8fafd708db16860b628928ec5a821', 'files': [{'path': 'ptuning/main.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["ptuning/main.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "edb127326a2d5afd855484f12a38e0119151f826", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/723", "iss_label": "", "title": "centos\u4e0a\uff0c2\u4e2a12g\u663e\u5b58\u7684\u663e\u5361\u5982\u4f55\u914d\u7f6e\u53ef\u4ee5\u540c\u65f6\u4f7f\u7528", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\ncentos\u4e0a\uff0c2\u4e2a12g\u663e\u5b58\u7684\u663e\u5361\uff0c\u65e0\u8bba\u662f\u8bad\u7ec3\u8fd8\u662fweb\uff0c\u90fd\u59cb\u7ec8\u75280\u53f7\u663e\u5361\uff0c\u5982\u4f55\u914d\u7f6e\u53ef\u4ee5\u540c\u65f6\u4f7f\u7528\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nCentos7\r\n12G nvida *2\n\n### Environment\n\n```markdown\n- OS:Centos7\r\n- Python:3.8\r\n- Transformers:4.26.1\r\n- PyTorch: 1.12\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) :True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'edb127326a2d5afd855484f12a38e0119151f826', 'files': [{'path': 'ptuning/train.sh', 'Loc': {'(None, None, 4)': {'mod': [4]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nOther \u811a\u672c"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["ptuning/train.sh"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "801b1bb57690f0a99943f0a80c839b9ee120f3a7", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/394", "iss_label": "", "title": "[BUG/Help] ValueError: 150000 is not in list", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n 0%| | 19/30000 [31:30<828:54:23, 99.53s/it]\r\n 0%| | 20/30000 [33:09<828:37:17, 99.50s/it]\r\n 0%| | 21/30000 [34:48<828:09:42, 
99.45s/it]Traceback (most recent call last):\r\n File \"/root/projects/ChatGLM-6B/ptuning/main.py\", line 393, in <module>\r\n main()\r\n File \"/root/projects/ChatGLM-6B/ptuning/main.py\", line 332, in main\r\n train_result = trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 1633, in train\r\n return inner_training_loop(\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 1902, in _inner_training_loop\r\n tr_loss_step = self.training_step(model, inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 2645, in training_step\r\n loss = self.compute_loss(model, inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/transformers/trainer.py\", line 2677, in compute_loss\r\n outputs = model(**inputs)\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 1160, in forward\r\n transformer_outputs = self.transformer(\r\n File \"/root/anaconda3/envs/torch10/lib/python3.9/site-packages/torch/nn/modules/module.py\", line 1501, in _call_impl\r\n return forward_call(*args, **kwargs)\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 928, in forward\r\n mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]\r\n File \"/root/.cache/huggingface/modules/transformers_modules/modeling_chatglm.py\", line 928, in <listcomp>\r\n mask_positions = [seq.tolist().index(mask_token) for seq in input_ids]\r\nValueError: 150000 is not in list\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nPRE_SEQ_LEN=8\r\nLR=1e-2\r\n\r\nCUDA_VISIBLE_DEVICES=0 python3 main.py \\\r\n --do_train \\\r\n --train_file ../data/train.json \\\r\n --validation_file ../data/dev.json \\\r\n --prompt_column instruction \\\r\n --response_column output \\\r\n --overwrite_cache \\\r\n --model_name_or_path ~/projects/zero_nlp/simple_thu_chatglm6b/thuglm/ \\\r\n --output_dir output/adgen-chatglm-6b-pt-$PRE_SEQ_LEN-$LR \\\r\n --overwrite_output_dir \\\r\n --max_source_length 64 \\\r\n --max_target_length 64 \\\r\n --per_device_train_batch_size 100 \\\r\n --per_device_eval_batch_size 100 \\\r\n --gradient_accumulation_steps 16 \\\r\n --predict_with_generate \\\r\n --max_steps 30000 \\\r\n --logging_steps 100 \\\r\n --save_steps 100 \\\r\n --learning_rate $LR \\\r\n --pre_seq_len $PRE_SEQ_LEN \\\r\n # --quantization_bit 4\r\n\r\n\r\n\n\n### Environment\n\n```markdown\n- OS: centos8\r\n- Python: 3.9\r\n- Transformers: 4.27.1\r\n- PyTorch:2.0.0\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ice_text.model", "modeling_chatglm.py"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0\n2", "info_type": "Code"}, "loctype": {"code": ["modeling_chatglm.py"], "doc": [], "test": [], "config": [], "asset": ["ice_text.model"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "1047e446e5387aa06c856c95800f67beab8b80d4", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/224", "iss_label": 
"", "title": "[BUG/Help] ImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n>>> model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\",trust_remote_code=True).float()\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 456, in from_pretrained\r\n logger.warning(\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 374, in get_class_from_dynamic_module\r\n\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\dynamic_module_utils.py\", line 147, in get_class_in_module\r\n def get_class_in_module(class_name, module_path):\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\importlib\\__init__.py\", line 127, in import_module\r\n return _bootstrap._gcd_import(name[level:], package, level)\r\n File \"<frozen importlib._bootstrap>\", line 1006, in _gcd_import\r\n File \"<frozen importlib._bootstrap>\", line 983, in _find_and_load\r\n File \"<frozen importlib._bootstrap>\", line 967, in _find_and_load_unlocked\r\n File \"<frozen importlib._bootstrap>\", line 677, in _load_unlocked\r\n File \"<frozen importlib._bootstrap_external>\", line 728, in exec_module\r\n File \"<frozen importlib._bootstrap>\", line 219, in _call_with_frames_removed\r\n File \"C:\\Users\\mina_/.cache\\huggingface\\modules\\transformers_modules\\THUDM\\chatglm-6b-int4\\dac03c3ac833dab2845a569a9b7f6ac4e8c5dc9b\\modeling_chatglm.py\", line 30, in <module>\r\n from transformers.generation.utils import LogitsProcessorList, StoppingCriteriaList, GenerationConfig\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\generation\\utils.py\", line 39, in <module>\r\n from .configuration_utils import GenerationConfig\r\n File \"C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\generation\\configuration_utils.py\", line 24, in <module>\r\n from ..utils import (\r\nImportError: cannot import name 'GENERATION_CONFIG_NAME' from 'transformers.utils' (C:\\Users\\mina_\\Anaconda3\\envs\\ChatGLM-6B\\lib\\site-packages\\transformers\\utils\\__init__.py)\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n1. `conda activate chatglm-6b`\r\n2. `from transformers import AutoTokenizer, AutoModel`\r\n3. `tokenizer = AutoTokenizer.from_pretrained(\"THUDM/chatglm-6b\", trust_remote_code=True)`\r\n4. `model = AutoModel.from_pretrained(\"THUDM/chatglm-6b-int4\",trust_remote_code=True).float()`\r\n5. 
See this issue.\n\n### Environment\n\n```markdown\n- OS: Windows 10\r\n- Python: 3.7.5\r\n- Transformers:\r\n- PyTorch:\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : False\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1047e446e5387aa06c856c95800f67beab8b80d4', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "b65142b5e54e52b27c1c1269e1b4abd83efcce45", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/422", "iss_label": "", "title": "[BUG/Help] <title>KeyError: 'lm_head.weight'", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u62a5\u9519\uff1aKeyError: 'lm_head.weight'\n\n### Expected Behavior\n\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a configuration with custom code to ensure no malicious code has been contributed in a newer revision.\r\nExplicitly passing a `revision` is encouraged when loading a model with custom code to ensure no malicious code has been contributed in a newer revision.\r\n\r\nLoading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]\r\nLoading checkpoint shards: 0%| | 0/8 [00:00<?, ?it/s]\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Administrator\\Downloads\\ChatGLM-6B-main\\cli_demo.py\", line 7, in <module>\r\n model = AutoModel.from_pretrained(r\"C:\\Users\\Administrator\\Downloads\\ChatGLM-6B-main\\model\",trust_remote_code=True,ignore_mismatched_sizes=True).half().quantize(4).cuda()\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\models\\auto\\auto_factory.py\", line 466, in from_pretrained\r\n return model_class.from_pretrained(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2646, in from_pretrained\r\n ) = cls._load_pretrained_model(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2959, in _load_pretrained_model\r\n mismatched_keys += _find_mismatched_keys(\r\n File \"C:\\Program Files\\Python310\\lib\\site-packages\\transformers\\modeling_utils.py\", line 2882, in _find_mismatched_keys\r\n and state_dict[checkpoint_key].shape != model_state_dict[model_key].shape\r\nKeyError: 'lm_head.weight'\r\n\n\n### Steps To Reproduce\n\n\u62a5\u9519\uff1aKeyError: 'lm_head.weight'\n\n### Environment\n\n```markdown\n- OS:windows 10\r\n- Python:3.10\r\n- Transformers:4.27.1\r\n- PyTorch:cu118\r\n- CUDA Support True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["pytorch_model-00001-of-00008.bin", "pytorch_model-00008-of-00008.bin"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Models/\u6570\u636e"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["pytorch_model-00001-of-00008.bin", 
"pytorch_model-00008-of-00008.bin"]}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "8633db1503fc3b0edc1d035f64aa35dce5d97969", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/622", "iss_label": "", "title": "[BUG/Help] ptuning\u65f6\uff0c\u6307\u5b9aPRE_SEQ_LEN=512\uff0c\u8bad\u7ec3\u540e\uff0c\u56de\u7b54\u7684\u95ee\u9898\u4ecd\u65e7\u6709\u56de\u7b54\u4e00\u767e\u5b57\u5de6\u53f3\u5c31\u65ad\u4e86\uff0c\u8be5\u5982\u4f55\u8c03\u6574\uff1f", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\n\u8bad\u7ec3\u53c2\u6570\u5982\u4e0b\uff1a\r\nPRE_SEQ_LEN=512\r\nLR=2e-2\r\n\r\nCUDA_VISIBLE_DEVICES=0 python3 main.py \\\r\n --do_train \\\r\n --train_file ./data/gwddc.json \\\r\n --validation_file ./data/gwddc_test.json \\\r\n --prompt_column instruction \\\r\n --response_column output \\\r\n --overwrite_cache \\\r\n --model_name_or_path THUDM/chatglm-6b \\\r\n --output_dir output/adgen-chatglm-6b-pt-gwddc-v3 \\\r\n --overwrite_output_dir \\\r\n --max_source_length 64 \\\r\n --max_target_length 64 \\\r\n --per_device_train_batch_size 4 \\\r\n --per_device_eval_batch_size 1 \\\r\n --gradient_accumulation_steps 16 \\\r\n --predict_with_generate \\\r\n --max_steps 3000 \\\r\n --logging_steps 10 \\\r\n --save_steps 1000 \\\r\n --learning_rate $LR \\\r\n --pre_seq_len $PRE_SEQ_LEN\r\n\r\n\u8bad\u7ec3\u6210\u529f\uff0c\u52a0\u8f7dcheckpoint\u6a21\u578b\u4e5f\u6210\u529f\uff0c\u8f93\u5165prompts\u4e5f\u80fd\u6b63\u5e38\u56de\u7b54\uff0c\u53ef\u662f\uff0c\u56de\u7b54\u957f\u5ea6\u4ecd\u65e7\u5f88\u77ed\uff0c\u8fd8\u4f1a\u51fa\u73b0\u56de\u7b54\u534a\u622a\u65ad\u6389\u7684\u60c5\u51b5\uff0c\u8bf7\u95ee\u8be5\u5982\u4f55\u8c03\u6574\u8bad\u7ec3\u53c2\u6570\uff1f\r\n\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\n1. ./data/gwddc.json\u4e3a\u81ea\u5907\u7684\u8bad\u7ec3\u96c6\uff0cprompts\u53ea\u6709\u4e0d\u52302000\u6761\r\n2. 
Enter the above arguments and run; the training output is as follows:\r\n\u2026\u2026\r\n{'loss': 0.0371, 'learning_rate': 0.0, 'epoch': 96.77}\r\nSaving PrefixEncoder\r\n{'train_runtime': 21212.1807, 'train_samples_per_second': 9.051, 'train_steps_per_second': 0.141, 'train_loss': 0.2381483610868454, 'epoch': 96.77}\r\n***** train metrics *****\r\n epoch = 96.77\r\n train_loss = 0.2381\r\n train_runtime = 5:53:32.18\r\n train_samples = 1982\r\n train_samples_per_second = 9.051\r\n train_steps_per_second = 0.141\r\nCould you check whether train_loss is the problem? Does the number of iterations need to be increased?\n\n### Environment\n\n```markdown\n- OS: centos 7.6\r\n- Python: 3.9\r\n- Transformers: 4.27.1\r\n- PyTorch: 2.0.0+cu117\r\n- CUDA Support (`python -c \"import torch; print(torch.cuda.is_available())\"`) : True\n```\n\n\n### Anything else?\n\nAlso, \"Who are you?\" was specifically included in training, but after deployment it did not take effect.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8633db1503fc3b0edc1d035f64aa35dce5d97969', 'files': [{'path': 'ptuning/README.md', 'Loc': {'(None, None, 180)': {'mod': [180]}}, 'status': 'modified'}, {'path': 'ptuning/arguments.py', 'Loc': {\"('DataTrainingArguments', None, 65)\": {'mod': [123]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": ["ptuning/arguments.py"], "doc": ["ptuning/README.md"], "test": [], "config": [], "asset": []}} +{"organization": "THUDM", "repo_name": "ChatGLM-6B", "base_commit": "a14bc1d32452d92613551eb5d523e00950913710", "iss_html_url": "https://github.com/THUDM/ChatGLM-6B/issues/353", "iss_label": "enhancement", "title": "[Help] How to support multiple GPUs", "body": "### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Current Behavior\n\nFor internal company use we installed 2 GPUs, but with the default configuration only 1 GPU is running. How should this be used so that multiple GPUs are utilized?\n\n### Expected Behavior\n\n_No response_\n\n### Steps To Reproduce\n\nNone\n\n### Environment\n\n```markdown\nOS: Ubuntu 20.04\r\nPython: 3.8\r\nTransformers: 4.26.1\r\nPyTorch: 1.12\r\nCUDA Support: True\n```\n\n\n### Anything else?\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a14bc1d32452d92613551eb5d523e00950913710', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nHow to support multiple GPUs", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "34f28b2a1342fd72c2e4d4e5613855bfb9f35d34", "iss_html_url": "https://github.com/huggingface/transformers/issues/1225", "iss_label": "wontfix", "title": "Bert output last hidden state", "body": "## \u2753 Questions & Help\r\n\r\nHi,\r\n\r\nSuppose we have an utterance of length 24 (considering special tokens) and we right-pad it with 0 to max length of 64.\r\nIf we use a Bert pretrained model to get the last hidden 
states, the output would be of size [1, 64, 768]. \r\nCan we use just the first 24 as the hidden states of the utterance? I mean is it right to say that the output[0, :24, :] has all the required information?\r\nI realized that from index 24:64, the outputs have float values as well.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '34f28b2a1342fd72c2e4d4e5613855bfb9f35d34', 'files': [{'path': 'src/transformers/models/bert/modeling_bert.py', 'Loc': {\"('BertSelfAttention', 'forward', 276)\": {'mod': [279]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/models/bert/modeling_bert.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "82c7e879876822864b5ceaf2c99eb01159266bcd", "iss_html_url": "https://github.com/huggingface/transformers/issues/27200", "iss_label": "", "title": "dataset download error in speech recognition examples", "body": "### System Info\n\n- `transformers` version: 4.35.0.dev0\r\n- Platform: Linux-5.15.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.18\r\n- Huggingface_hub version: 0.17.3\r\n- Safetensors version: 0.4.0\r\n- Accelerate version: 0.24.1\r\n- Accelerate config: not found\r\n- PyTorch version (GPU?): 1.10.0+cu111 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: <fill in>\r\n- Using distributed or parallel set-up in script?: <fill in>\n\n### Who can help?\n\n@stevhliu and @MKhalusova\n\n### Information\n\n- [x] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nCUDA_VISIBLE_DEVICES=0 python run_speech_recognition_ctc.py \\\r\n\t--dataset_name=\"common_voice\" \\\r\n\t--model_name_or_path=\"facebook/wav2vec2-large-xlsr-53\" \\\r\n\t--dataset_config_name=\"tr\" \\\r\n\t--output_dir=\"./wav2vec2-common_voice-tr-demo\" \\\r\n\t--overwrite_output_dir \\\r\n\t--num_train_epochs=\"15\" \\\r\n\t--per_device_train_batch_size=\"16\" \\\r\n\t--gradient_accumulation_steps=\"2\" \\\r\n\t--learning_rate=\"3e-4\" \\\r\n\t--warmup_steps=\"500\" \\\r\n\t--evaluation_strategy=\"steps\" \\\r\n\t--text_column_name=\"sentence\" \\\r\n\t--length_column_name=\"input_length\" \\\r\n\t--save_steps=\"400\" \\\r\n\t--eval_steps=\"100\" \\\r\n\t--layerdrop=\"0.0\" \\\r\n\t--save_total_limit=\"3\" \\\r\n\t--freeze_feature_encoder \\\r\n\t--gradient_checkpointing \\\r\n\t--chars_to_ignore , ? . ! 
- \\; \\: \\\" \u201c % \u2018 \u201d \ufffd \\\r\n\t--fp16 \\\r\n\t--group_by_length \\\r\n\t--push_to_hub \\\r\n\t--do_train --do_eval \n\n### Expected behavior\n\nWhen I run the default command, which set `dataset_name` as \"common_voice\", and I got a warning:\r\n```\r\n/home/xintong/.cache/huggingface/modules/datasets_modules/datasets/common_voice/220833898d6a60c50f621126e51fb22eb2dfe5244392c70dccd8e6e2f055f4bf/common_voice.py:634: FutureWarning: \r\n This version of the Common Voice dataset is deprecated.\r\n You can download the latest one with\r\n >>> load_dataset(\"mozilla-foundation/common_voice_11_0\", \"en\")\r\n \r\n warnings.warn(\r\nGenerating train split: 0%| | 0/1831 [00:00<?, ? examples/s]\r\nTraceback (most recent call last):\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 2578, in next\r\n tarinfo = self.tarinfo.fromtarfile(self)\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 1283, in fromtarfile\r\n obj = cls.frombuf(buf, tarfile.encoding, tarfile.errors)\r\n File \"/home/xintong/miniconda3/envs/test/lib/python3.8/tarfile.py\", line 1221, in frombuf\r\n raise TruncatedHeaderError(\"truncated header\")\r\ntarfile.TruncatedHeaderError: truncated header\r\n```\r\nI modified this into `mozilla-foundation/common_voice_11_0`, it passed. \r\n```\r\nDownloading builder script: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 8.13k/8.13k [00:00<00:00, 30.3MB/s]\r\nDownloading readme: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 14.4k/14.4k [00:00<00:00, 19.2MB/s]\r\nDownloading extra modules: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 3.44k/3.44k [00:00<00:00, 19.9MB/s]\r\nDownloading extra modules: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 60.9k/60.9k [00:00<00:00, 304kB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 12.2k/12.2k [00:00<00:00, 25.6MB/s]\r\nDownloading data: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 568M/568M [00:07<00:00, 71.7MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 233M/233M [00:02<00:00, 78.6MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 285M/285M [00:04<00:00, 67.7MB/s]\r\nDownloading data: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 4.86M/4.86M [00:00<00:00, 73.3MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 109M/109M [00:01<00:00, 80.4MB/s]\r\nDownloading data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:21<00:00, 4.24s/it]\r\nExtracting data files: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:07<00:00, 1.54s/it]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5.76M/5.76M [00:00<00:00, 56.0MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.17M/2.17M [00:00<00:00, 54.1MB/s]\r\nDownloading data: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 2.18M/2.18M [00:00<00:00, 64.3MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 32.8k/32.8k [00:00<00:00, 53.1MB/s]\r\nDownloading data: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 800k/800k [00:00<00:00, 59.8MB/s]\r\nDownloading data files: 
100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:05<00:00, 1.01s/it]\r\nExtracting data files: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 5/5 [00:00<00:00, 2954.98it/s]\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '82c7e879876822864b5ceaf2c99eb01159266bcd', 'files': [{'path': 'examples/pytorch/speech-recognition/README.md', 'Loc': {'(None, None, 69)': {'mod': [69]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["examples/pytorch/speech-recognition/README.md"], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494", "iss_html_url": "https://github.com/huggingface/transformers/issues/12081", "iss_label": "", "title": "GPT2 Flax \"TypeError: JAX only supports number and bool dtypes, got dtype object in array\"", "body": "On GPU\r\n\r\n```\r\n>>> from transformers import AutoTokenizer, FlaxAutoModelForCausalLM\r\n\r\n>>> tokenizer = AutoTokenizer.from_pretrained(\"gpt2-medium\")\r\n>>> model = FlaxAutoModelForCausalLM.from_pretrained(\"gpt2-medium\")\r\n>>> input_context = \"The dog\"\r\n>>> # encode input context\r\n>>> input_ids = tokenizer(input_context, return_tensors=\"jax\").input_ids\r\n>>> # generate candidates using sampling\r\n>>> outputs = model.generate(input_ids=input_ids, max_length=20, top_k=30, do_sample=True)\r\n\r\nTypeError: JAX only supports number and bool dtypes, got dtype object in array\r\n```\r\n\r\n@patrickvonplaten @patil-suraj ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 
'0e82f0cbc28b41b3d87a5e4069dc0e20bacc2494', 'files': [{'path': 'src/transformers/models/gpt2/modeling_flax_gpt2.py', 'Loc': {\"('FlaxGPT2LMHeadModule', None, 553)\": {'mod': []}}, 'status': 'modified'}, {'path': 'src/transformers/models/gpt2/tokenization_gpt2_fast.py', 'Loc': {\"('GPT2TokenizerFast', None, 70)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [6, 7], 'path': None}]}", "own_code_loc": [{"Loc": [6, 7], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "src/transformers/models/gpt2/tokenization_gpt2_fast.py", "src/transformers/models/gpt2/modeling_flax_gpt2.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "322037e842e5e89080918c824998c17722df6f19", "iss_html_url": "https://github.com/huggingface/transformers/issues/10079", "iss_label": "", "title": "Unclear error \"NotImplementedError\" while saving tokenizer. How to fix it?", "body": "Here is my tokenizer code and how I save it to a JSON file \"/content/bert-datas7.json\"\r\n\r\n````\r\nfrom tokenizers import Tokenizer, normalizers\r\nfrom tokenizers.models import WordPiece\r\nfrom tokenizers.normalizers import Lowercase, NFD, StripAccents\r\nfrom tokenizers.pre_tokenizers import Whitespace\r\n\r\n# The base tokenizer definition was missing from the snippet as posted; a\r\n# standard WordPiece setup is assumed here so the code runs end to end.\r\nbert_tokenizer = Tokenizer(WordPiece(unk_token=\"[UNK]\"))\r\nbert_tokenizer.normalizer = normalizers.Sequence([NFD(), Lowercase(), StripAccents()])\r\nbert_tokenizer.pre_tokenizer = Whitespace()\r\n\r\nfrom tokenizers.processors import TemplateProcessing\r\n\r\nbert_tokenizer.post_processor = TemplateProcessing(\r\n single=\"[CLS] $A [SEP]\",\r\n pair=\"[CLS] $A [SEP] $B:1 [SEP]:1\",\r\n special_tokens=[\r\n (\"[CLS]\", 1),\r\n (\"[SEP]\", 2),\r\n (\"[PAD]\", 3),\r\n ],\r\n \r\n)\r\nfrom tokenizers.trainers import WordPieceTrainer\r\n\r\ntrainer = WordPieceTrainer(\r\n vocab_size=30522, special_tokens=[\"[UNK]\", \"[CLS]\", \"[SEP]\", \"[PAD]\", \"[MASK]\"], pad_to_max_length=True\r\n)\r\nfiles = [f\"/content/For_ITMO.txt\" for split in [\"test\", \"train\", \"valid\"]]\r\nbert_tokenizer.train(trainer, files)\r\n\r\nmodel_files = bert_tokenizer.model.save(\"data\", \"/content/For_ITMO.txt\")\r\n\r\nbert_tokenizer.model = WordPiece.from_file(*model_files, unk_token=\"[UNK]\", pad_to_max_length=True)\r\n\r\nbert_tokenizer.save(\"/content/bert-datas7.json\") \r\n````\r\n\r\nWhen I print the tokenizer, name_or_path is empty. Is this normal?\r\n\r\n\r\n````\r\ntokenizer = PreTrainedTokenizerFast(tokenizer_file='/content/bert-datas7.json')\r\ntokenizer.add_special_tokens({'pad_token': '[PAD]'})\r\n\r\nprint(tokenizer)\r\n>>> PreTrainedTokenizerFast(name_or_path='', vocab_size=1435, model_max_len=1000000000000000019884624838656, is_fast=True, padding_side='right', special_tokens={'pad_token': '[PAD]'})\r\n````\r\nAlso, when I try to save my tokenizer, I get an error without explanation. 
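[Editor's aside] The `NotImplementedError` quoted below comes from `save_pretrained` falling back to the slow tokenizer's `save_vocabulary`, which a `PreTrainedTokenizerFast` built from a bare `tokenizer_file` could not provide at the time (the flagged fix lives in `src/transformers/tokenization_utils_fast.py`). A minimal workaround sketch, reusing the report's file paths and assuming the target directory exists; newer transformers releases handle `save_pretrained` for this case directly:

```python
from transformers import PreTrainedTokenizerFast

tokenizer = PreTrainedTokenizerFast(tokenizer_file="/content/bert-datas7.json")
tokenizer.add_special_tokens({"pad_token": "[PAD]"})

# Serialize the underlying `tokenizers.Tokenizer` instead of the wrapper;
# the JSON it writes can be reloaded via tokenizer_file= as above.
tokenizer.backend_tokenizer.save("/content/tokennizerrrr/tokenizer.json")
```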
How can I rewrite the code so that all of this works?\r\n#9658 \r\n#10039 \r\n[For_ITMO.txt-vocab (1) (1).txt](https://github.com/huggingface/transformers/files/5945659/For_ITMO.txt-vocab.1.1.txt)\r\n \r\n````\r\ntokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\nNotImplementedError Traceback (most recent call last)\r\n<ipython-input-11-efc48254a528> in <module>()\r\n----> 1 tokenizer.save_pretrained(\"/content/tokennizerrrr\")\r\n\r\n2 frames\r\n/usr/local/lib/python3.6/dist-packages/transformers/tokenization_utils_base.py in save_vocabulary(self, save_directory, filename_prefix)\r\n 2042 :obj:`Tuple(str)`: Paths to the files saved.\r\n 2043 \"\"\"\r\n-> 2044 raise NotImplementedError\r\n 2045 \r\n 2046 def tokenize(self, text: str, pair: Optional[str] = None, add_special_tokens: bool = False, **kwargs) -> List[str]:\r\n\r\nNotImplementedError: \r\n````\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '322037e842e5e89080918c824998c17722df6f19', 'files': [{'path': 'src/transformers/tokenization_utils_fast.py', 'Loc': {\"('PreTrainedTokenizerFast', '_save_pretrained', 505)\": {'mod': [509]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/tokenization_utils_fast.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "77a257fc210a56f1fd0d75166ecd654cf58111f3", "iss_html_url": "https://github.com/huggingface/transformers/issues/8403", "iss_label": "", "title": "[s2s finetune] huge increase in memory demands with --fp16 native amp", "body": "While working on https://github.com/huggingface/transformers/issues/8353 I discovered that `--fp16` causes a 10x+ increase in gpu memory demands.\r\n\r\ne.g. I can run bs=12 w/o `--fp16` \r\n\r\n```\r\ncd examples/seq2seq\r\nexport BS=12; rm -rf distilbart-cnn-12-6; python finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6\r\n\r\n```\r\nBut if I add:\r\n```\r\n--fp16\r\n```\r\n\r\n(w/ or w/o `--fp16_opt_level O1`)\r\n\r\nI get OOM even with bs=1 on an 8GB card and it barely manages on a 24GB card - I think the increase in memory demand is more than 10x.\r\n\r\nThe OOM happens either right away during the sanity check step, or after just 10-20 batches - so within a few seconds.\r\n\r\nThis is with pytorch-1.6. Same goes for pytorch-1.7 and 1.8-nightly.\r\n\r\nI wasn't able to test `--fp16` with pytorch-1.5, since I can't build apex on ubuntu-20.04. Without `--fp16` pytorch-1.5 works the same as pytorch-1.6 gpu memory-wise.\r\n\r\nI tested with pytorch-1.5 + apex and there is no problem there. 
Memory consumption is about half.\r\n\r\nHere is the table of the batch sizes that fit into an 8GB rtx-1070 (a bigger BS leads to an instant OOM):\r\n\r\nbs | version\r\n---|--------\r\n12 | pt15\r\n20 | pt15+fp16\r\n12 | pt16\r\n1 | pt16+fp16\r\n\r\n\r\n\r\nIf you'd like to reproduce the problem, here are the full steps:\r\n\r\n```\r\n# prep library\r\ngit clone https://github.com/huggingface/transformers\r\ncd transformers\r\npip install -e .[dev]\r\npip install -r examples/requirements.txt\r\ncd examples/seq2seq\r\n\r\n# prep data\r\nwget https://cdn-datasets.huggingface.co/summarization/cnn_dm_v2.tgz\r\ntar -xzvf cnn_dm_v2.tgz # empty lines removed\r\nmv cnn_cln cnn_dm\r\n\r\n# run\r\nexport BS=12; \r\nrm -rf distilbart-cnn-12-6\r\npython finetune.py --learning_rate=3e-5 --gpus 1 \\\r\n--do_train --do_predict --val_check_interval 0.25 --n_val 500 --num_train_epochs 2 --freeze_encoder \\\r\n--freeze_embeds --data_dir cnn_dm --max_target_length 142 --val_max_target_length=142 \\\r\n--train_batch_size=$BS --eval_batch_size=$BS --gradient_accumulation_steps 1 \\\r\n--model_name_or_path sshleifer/student_cnn_12_6 --tokenizer_name facebook/bart-large \\\r\n--warmup_steps 500 --output_dir distilbart-cnn-12-6 \r\n```\r\n\r\nThis issue is to track the problem and hopefully find a solution.\r\n\r\n@sshleifer ", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/pytorch/pytorch/commit/57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57", "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "pytorch", "pro": "pytorch", "path": ["{'base_commit': '57bffc3a8e4fee0cce31e1ff1f662ccf7b16db57', 'files': [{'path': 'aten/src/ATen/autocast_mode.cpp', 'status': 'modified', 'Loc': {\"(None, 'cached_cast', 67)\": {'mod': [71]}}}, {'path': 'test/test_cuda.py', 'status': 'modified', 'Loc': {\"('TestCuda', None, 92)\": {'add': [2708]}}}]}"]}], "analysis": {"iss_type": "2", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["aten/src/ATen/autocast_mode.cpp"], "doc": [], "test": ["test/test_cuda.py"], "config": [], "asset": ["pytorch"]}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "1a688709b34b10bd372e3e0860c8d39d170ebf53", "iss_html_url": "https://github.com/huggingface/transformers/issues/17201", "iss_label": "", "title": "a memory leak in qqp prediction using bart", "body": "### System Info\n\n```shell\n- `transformers` version: 4.19.0.dev0\r\n- Platform: Linux-5.11.0-43-generic-x86_64-with-glibc2.17\r\n- Python version: 3.8.10\r\n- Huggingface_hub version: 0.4.0\r\n- PyTorch version (GPU?): 1.10.1 (True)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Flax version (CPU?/GPU?/TPU?): not installed (NA)\r\n- Jax version: not installed\r\n- JaxLib version: not installed\r\n- Using GPU in script?: Yes\r\n- Using distributed or parallel set-up in script?: No\n```\n\n\n### Who can help?\n\n@sgugger\n\n### Information\n\n- [X] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nI hit the same issue as #11011. Without `--eval_accumulation_steps`, it causes CUDA out-of-memory; with it, it runs out of RAM and is killed by the system.\r\n\r\nI only ran prediction on the GLUE QQP dataset using BART, without fine-tuning. 
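[Editor's aside] For the prediction-time OOM described above, the usual mitigation is to keep only what is needed from each batch and move it off the GPU immediately, instead of letting the evaluation loop accumulate full logits. A hedged sketch in generic PyTorch (not the `Trainer` internals that the flagged `evaluation_loop` fix touches):

```python
import torch

def predict_in_chunks(model, loader, device="cuda"):
    """Greedy class predictions without accumulating logits on the GPU or in RAM."""
    model.eval()
    all_preds = []
    with torch.no_grad():
        for batch in loader:
            batch = {k: v.to(device) for k, v in batch.items()}
            logits = model(**batch).logits
            # Keep only the argmax per example, already moved to CPU.
            all_preds.append(logits.argmax(dim=-1).cpu())
    return torch.cat(all_preds)
```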
Since QQP has a large test set (300k examples), prediction got slower and slower, and finally ran out of memory.\r\n\r\nThis is the script to reproduce:\r\n```\r\nCUDA_VISIBLE_DEVICES=0 python run_glue.py --model_name_or_path facebook/bart-large --task_name qqp --output_dir bart-large_qqp --eval_accumulation_steps 100 --do_predict --per_device_eval_batch_size 24\r\n```\n\n### Expected behavior\n\n```shell\nPrediction without running out of memory.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1a688709b34b10bd372e3e0860c8d39d170ebf53', 'files': [{'path': 'src/transformers/trainer.py', 'Loc': {\"('Trainer', 'evaluation_loop', 2549)\": {'mod': [2635]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2\nOr\n5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/trainer.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5", "iss_html_url": "https://github.com/huggingface/transformers/issues/28435", "iss_label": "", "title": "Skip some weights for load_in_8bit and keep them as fp16/32?", "body": "### Feature request\r\n\r\nHello,\r\n\r\nI am looking for a way to load a checkpoint where I only load some of the weights in 8 bit and keep others in 16/32 bit.\r\n\r\n### Motivation\r\n\r\nMy motivation is for vision-language models like Llava or BLIP2 where I want to load the LLM part in 8 bit but the image encoder should stay in 16 bit because I notice performance degradations with CLIP in 8 bit and also want to be able to train this part without LoRA.\r\n\r\nAs far as I can see in the documentation, issues and with Google (both here and for bitsandbytes), there is currently no way to do this.\r\n\r\n### Your contribution\r\n\r\nI can in theory help implement something like this but I don't know where and how in the code this should be done.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'cef2e40e0f8eaad13b8d32817a48fdddc32eb2a5', 'files': [{'path': 'src/transformers/modeling_utils.py', 'Loc': {\"('PreTrainedModel', 'from_pretrained', 2528)\": {'mod': [3524]}}, 'status': 'modified'}, {'path': 'src/transformers/utils/quantization_config.py', 'Loc': {\"('BitsAndBytesConfig', None, 151)\": {'mod': [176]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/modeling_utils.py", "src/transformers/utils/quantization_config.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "705ca7f21b2b557e0cfd5d0853b297fa53489d20", "iss_html_url": "https://github.com/huggingface/transformers/issues/14938", "iss_label": "", "title": "Question: Object of type EncoderDecoderConfig is not JSON serializable", "body": "Hi.\r\nAn error occurred when I used Trainer to train and save EncoderDecoderModel.\r\n\r\n```python\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 482, in <module>\r\n run(model_args, data_args, training_args)\r\n File \"/home/jwli/ljw/study/hotpotqa/roberta_seq2seq/roberta_for_seq2seq.py\", line 465, in run\r\n train_result = 
trainer.train(resume_from_checkpoint=checkpoint)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1391, in train\r\n self._maybe_log_save_evaluate(tr_loss, model, trial, epoch, ignore_keys_for_eval)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1495, in _maybe_log_save_evaluate\r\n self._save_checkpoint(model, trial, metrics=metrics)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1557, in _save_checkpoint\r\n self.save_model(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 1961, in save_model\r\n self._save(output_dir)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/trainer.py\", line 2009, in _save\r\n self.model.save_pretrained(output_dir, state_dict=state_dict)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/modeling_utils.py\", line 1053, in save_pretrained\r\n model_to_save.config.save_pretrained(save_directory)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 416, in save_pretrained\r\n self.to_json_file(output_config_file, use_diff=True)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 739, in to_json_file\r\n writer.write(self.to_json_string(use_diff=use_diff))\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/site-packages/transformers/configuration_utils.py\", line 725, in to_json_string\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/__init__.py\", line 238, in dumps\r\n **kw).encode(obj)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 201, in encode\r\n chunks = list(chunks)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 431, in _iterencode\r\n yield from _iterencode_dict(o, _current_indent_level)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 405, in _iterencode_dict\r\n yield from chunks\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 438, in _iterencode\r\n o = _default(o)\r\n File \"/home/jwli/anaconda3/envs/study/lib/python3.7/json/encoder.py\", line 179, in default\r\n raise TypeError(f'Object of type {o.__class__.__name__} '\r\nTypeError: Object of type EncoderDecoderConfig is not JSON serializable\r\n```\r\nMy model and config are defined by the following code. 
\r\n```python\r\n tokenizer = RobertaTokenizerFast.from_pretrained(model_args.tokenizer_name)\r\n encoder_config = RobertaConfig.from_pretrained(model_args.encoder_model_name_or_path)\r\n decoder_config = RobertaConfig.from_pretrained(model_args.decoder_model_name_or_path)\r\n encoder_decoder_config = EncoderDecoderConfig.from_encoder_decoder_configs(encoder_config, decoder_config)\r\n model = RobertaForSeq2Seq.from_encoder_decoder_pretrained(model_args.encoder_model_name_or_path,\r\n model_args.decoder_model_name_or_path,\r\n config=encoder_decoder_config, tie_encoder_decoder=True)\r\n model.config.decoder_start_token_id = tokenizer.bos_token_id\r\n model.config.eos_token_id = tokenizer.eos_token_id\r\n model.config.max_length = 64\r\n model.config.early_stopping = True\r\n model.config.no_repeat_ngram_size = 3\r\n model.config.length_penalty = 2.0\r\n model.config.num_beams = 4\r\n model.config.pad_token_id = tokenizer.pad_token_id\r\n```\r\nThis error occurred because EncoderDecoderConfig cannot be converted to JSON format, but I don't know how to modify it.\r\n```python\r\nERROR OCCURRED:\r\n\r\n if use_diff is True:\r\n config_dict = self.to_diff_dict()\r\n else:\r\n config_dict = self.to_dict()\r\n return json.dumps(config_dict, indent=2, sort_keys=True) + \"\\n\"\r\n```\r\n\r\nI look forward to your help! Thanks!\r\n @jplu @patrickvonplaten ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [46, 47], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "45d21502f0b67eb8a5ad244d469dcc0dfb7517a7", "iss_html_url": "https://github.com/huggingface/transformers/issues/653", "iss_label": "", "title": "Different Results from version 0.4.0 to version 0.5.0", "body": "Hi, I found that the results after training are different from version 0.4.0 to version 0.5.0. I have fixed all initialization to reproduce the results. I also tested versions 0.2.0 and 0.3.0; the results are the same as version 0.4.0, but from version 0.5.0+ the results are different. I am wondering whether you have trained a new model, so that the weights changed? 
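[Editor's aside] For version-to-version reproducibility checks like the one above, pinning every RNG before weight initialization is the standard first step. A minimal sketch using standard PyTorch/NumPy calls (whether seeding explains the 0.4.0 -> 0.5.0 gap is not established here):

```python
import random

import numpy as np
import torch

def set_seed(seed: int = 42) -> None:
    # Pin the RNGs that weight initialization and shuffling draw from.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
```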
", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '45d21502f0b67eb8a5ad244d469dcc0dfb7517a7', 'files': [{'path': 'pytorch_pretrained_bert/modeling.py', 'Loc': {\"('BertPreTrainedModel', 'init_bert_weights', 515)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["pytorch_pretrained_bert/modeling.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885", "iss_html_url": "https://github.com/huggingface/transformers/issues/10202", "iss_label": "", "title": "Fast Tokenizers instantiated via vocab/merge files do not respect skip_special_tokens=True", "body": "## Environment info\r\n- `transformers` version: 4.3.2\r\n- Platform: macOS-11.2.1-x86_64-i386-64bit\r\n- Python version: 3.9.1\r\n- PyTorch version (GPU?): 1.7.1 (False)\r\n- Tensorflow version (GPU?): not installed (NA)\r\n- Using GPU in script?: No\r\n- Using distributed or parallel set-up in script?: No\r\n\r\n## Information\r\n\r\nSee title; this issue does not reproduce with slow tokenizers. Does not reproduce with serialized tokenizers.\r\n\r\nFound while investigating https://github.com/minimaxir/aitextgen/issues/88\r\n\r\n## To reproduce\r\n\r\nUsing [gpt2_merges.txt](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_merges.txt) and [gpt2_vocab.json](https://github.com/minimaxir/aitextgen/blob/master/aitextgen/static/gpt2_vocab.json) as linked:\r\n\r\n```py\r\nfrom transformers import AutoModelForCausalLM, GPT2Tokenizer, GPT2TokenizerFast\r\n\r\nmodel = AutoModelForCausalLM.from_pretrained(\"distilgpt2\")\r\n\r\noutputs = model.generate(max_length=40)\r\n\r\n# tensor([[50256, 383, 471, 13, 50, 13, 2732, 286, 4796, 468,\r\n# 587, 10240, 262, 1918, 286, 257, 1966, 5349, 5797, 508,\r\n# 373, 2823, 290, 2923, 416, 257, 23128, 287, 262, 471,\r\n# 13, 50, 13, 13241, 319, 3583, 13, 198, 198, 198]])\r\n\r\ntokenizer_fast = GPT2TokenizerFast(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_fast.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# '<|endoftext|> The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. Capitol on Wednesday.\\n\\n\\n'\r\n\r\ntokenizer_slow = GPT2Tokenizer(vocab_file=\"gpt2_vocab.json\", merges_file=\"gpt2_merges.txt\")\r\ntokenizer_slow.decode(outputs[0], skip_special_tokens=True)\r\n\r\n# ' The U.S. Department of Justice has been investigating the death of a former FBI agent who was shot and killed by a gunman in the U.S. 
Capitol on Wednesday.\\n\\n\\n'\r\n\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1c8c2d9ab34b8c8d326db9e0608f8e54cfccb885', 'files': [{'path': 'src/transformers/tokenization_utils_base.py', 'Loc': {\"('SpecialTokensMixin', 'add_special_tokens', 900)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [33], 'path': None}]}", "own_code_loc": [{"Loc": [33], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "The comment points out the problem in the user's code and gives the API that needs to be used\nThe problem is in the user's own code; another issue points to the commit\nI think this is happening because when you load it from the vocab and merge files, it doesn't know <|endoftext|> is a special token. For the skip_special_tokens to work, I believe it would be necessary to add them to the tokenizer:\ntokenizer_fast.add_special_tokens({\n \"additional_special_tokens\": \"<|endoftext|>\"\n})\n", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/tokenization_utils_base.py", null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "huggingface", "repo_name": "transformers", "base_commit": "5bcbdff15922b1d0eeb035879630ca61c292122a", "iss_html_url": "https://github.com/huggingface/transformers/issues/32661", "iss_label": "bug", "title": "RoBERTa config defaults are inconsistent with fairseq implementation", "body": "### System Info\n\n python 3.12, transformers 4.14, latest mac os\n\n### Who can help?\n\n_No response_\n\n### Information\n\n- [ ] The official example scripts\n- [ ] My own modified scripts\n\n### Tasks\n\n- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)\n- [ ] My own task or dataset (give details below)\n\n### Reproduction\n\nfrom transformers import RobertaConfig\r\nmy_config = RobertaConfig()\r\nroberta_config = RobertaConfig.from_pretrained(\"roberta-base\")\r\n\r\nassert (\r\n my_config.max_position_embeddings == roberta_config.max_position_embeddings\r\n), \"%d %d\" % (my_config.max_position_embeddings, roberta_config.max_position_embeddings)\n\n### Expected behavior\n\nThe config defaults should correspond to the base model?\r\n\r\nThis is an implementation detail, but it did send me on a debugging spree as it hid as a sticky CUDA assertion error.\r\n```Assertion `srcIndex < srcSelectDimSize` failed```\r\n\r\nThe problem is that by default, whether you create the position_ids yourself or let the transformers roberta_modelling code take care of it (it also does it the way fairseq implemented it), it will create indices that are out of bounds with the default configuration, as everything is shifted by pad_token_id.\r\n\r\nThis is more of a heads up. 
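[Editor's aside] The shift described above is visible in the fairseq-style helper that transformers' RoBERTa modeling code uses: non-pad tokens get positions starting at `padding_idx + 1`, so a 512-token sequence needs `max_position_embeddings = 514` (as `roberta-base` ships), while a bare `RobertaConfig()` defaults to 512. A sketch mirroring that helper (the example ids are illustrative):

```python
import torch

def create_position_ids_from_input_ids(input_ids, padding_idx):
    # Non-pad tokens get positions padding_idx + 1, padding_idx + 2, ...;
    # pad tokens keep padding_idx itself.
    mask = input_ids.ne(padding_idx).int()
    incremental_indices = torch.cumsum(mask, dim=1) * mask
    return incremental_indices.long() + padding_idx

input_ids = torch.tensor([[0, 31414, 232, 2, 1, 1]])  # 1 is <pad> for RoBERTa
print(create_position_ids_from_input_ids(input_ids, padding_idx=1))
# tensor([[2, 3, 4, 5, 1, 1]]) -> indices can reach seq_len + 1, hence 514 slots
```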
Do transformers generally provide defaults aligned with the original models, or are the defaults here meant to be agnostic of that?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5bcbdff15922b1d0eeb035879630ca61c292122a', 'files': [{'path': 'src/transformers/models/roberta/configuration_roberta.py', 'Loc': {\"('RobertaConfig', None, 29)\": {'mod': [59]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/transformers/models/roberta/configuration_roberta.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f0df3144d68ed288f5ccce0c34d3939f8462ba98", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1345", "iss_label": "", "title": "Not able to run any MetaGPT examples", "body": "Referred Issue #1322 , but not able to resolve the issue. I added azure based api endpoint and api key in config2.yaml\r\n\r\n\r\n\u2502 105 \u2502 \u2502 typer.echo(\"Missing argument 'IDEA'. Run 'metagpt --help' for more information.\" \u2502\r\n\u2502 106 \u2502 \u2502 raise typer.Exit() \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 \u2771 108 \u2502 return generate_repo( \u2502\r\n\u2502 109 \u2502 \u2502 idea, \u2502\r\n\u2502 110 \u2502 \u2502 investment, \u2502\r\n\u2502 111 \u2502 \u2502 n_round, \u2502\r\n\u2502 \u2502\r\n\\metagpt\\software_company.py:30 in generate_repo \u2502\r\n\u2502 \u2502\r\n\u2502 27 \u2502 recover_path=None, \u2502\r\n\u2502 28 ) -> ProjectRepo: \u2502\r\n\u2502 29 \u2502 \"\"\"Run the startup logic. Can be called from CLI or other Python scripts.\"\"\" \u2502\r\n\u2502 \u2771 30 \u2502 from metagpt.config2 import config \u2502\r\n\u2502 31 \u2502 from metagpt.context import Context \u2502\r\n\u2502 32 \u2502 from metagpt.roles import ( \u2502\r\n\u2502 33 \u2502 \u2502 Architect, \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:164 in <module> \u2502\r\n\u2502 \u2502\r\n\u2502 161 \u2502 return result \u2502\r\n\u2502 162 \u2502\r\n\u2502 163 \u2502\r\n\u2502 \u2771 164 config = Config.default() \u2502\r\n\\new_meta_env\\Lib\\site-packages\\metagpt-0.8.1-py3.11.egg\\metagpt\\ \u2502\r\n\u2502 config2.py:106 in default \u2502\r\n\u2502 \u2502\r\n\u2502 103 \u2502 \u2502 dicts = [dict(os.environ)] \u2502\r\n\u2502 104 \u2502 \u2502 dicts += [Config.read_yaml(path) for path in default_config_paths] \u2502\r\n\u2502 105 \u2502 \u2502 final = merge_dict(dicts) \u2502\r\n\u2502 \u2771 106 \u2502 \u2502 return Config(**final) \u2502\r\n\u2502 107 \u2502 \u2502\r\n\u2502 108 \u2502 @classmethod \u2502\r\n\u2502 109 \u2502 def from_llm_config(cls, llm_config: dict): \u2502\r\n\u2502 \u2502\r\n\\new_meta_env\\Lib\\site-packages\\pydantic\\main.py:176 in __init__ \u2502\r\n\u2502 \u2502\r\n\u2502 173 \u2502 \u2502 \"\"\" \u2502\r\n\u2502 174 \u2502 \u2502 # `__tracebackhide__` tells pytest and some other tools to omit this function fr \u2502\r\n\u2502 175 \u2502 \u2502 __tracebackhide__ = True \u2502\r\n\u2502 \u2771 176 \u2502 \u2502 self.__pydantic_validator__.validate_python(data, self_instance=self) \u2502\r\n\u2502 177 \u2502 \u2502\r\n\u2502 178 \u2502 # The following line sets a flag that we use to determine when `__init__` gets overr \u2502\r\n\u2502 179 \u2502 __init__.__pydantic_base_init__ = True # pyright: 
ignore[reportFunctionMemberAccess \u2502\r\n\u2570\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u256f\r\nValidationError: 1 validation error for Config\r\nllm\r\n Field required [type=missing, input_value={'ALLUSERSPROFILE': 'C:\\\\..._INIT_AT_FORK': 'FALSE'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.7/v/missing", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f0df3144d68ed288f5ccce0c34d3939f8462ba98', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "e43aaec9322054f4dec92f44627533816588663b", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/576", "iss_label": "", "title": "Does MetaGPT support vector data for building your own knowledge base?", "body": "Does MetaGPT support vector data for building your own knowledge base?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e43aaec9322054f4dec92f44627533816588663b', 'files': [{'path': '/metagpt/document_store', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["/metagpt/document_store"], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "be56351e000a0f08562820fb04f6fdbe34d9e655", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/205", "iss_label": "", "title": "Rate Limited error", "body": "openai.error.RateLimitError: Rate limit reached for 10KTPM-200RPM in organization org-fK5bb25UFhVbebfBtfCejGc4 on tokens per min. Limit: 10000 / min. Please try again in 6ms. 
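[Editor's aside] For rate-limit failures like the one quoted here, the usual mitigation is retry with exponential backoff rather than losing the whole run; the flagged file is `metagpt/provider/openai_api.py`. A hedged sketch against the era's pre-1.0 `openai` SDK (tenacity is an assumed choice here, not necessarily what MetaGPT uses):

```python
import openai
from tenacity import retry, stop_after_attempt, wait_random_exponential

@retry(wait=wait_random_exponential(min=1, max=60), stop=stop_after_attempt(6))
def chat_with_backoff(**kwargs):
    # Retried with jittered exponential backoff so a transient rate limit
    # does not abort the whole generation run.
    return openai.ChatCompletion.create(**kwargs)

# chat_with_backoff(model="gpt-4", messages=[{"role": "user", "content": "hi"}])
```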
Contact us through our help center at help.openai.com if you continue to have issues.\r\n\r\nMaybe a way to resume so all the runtime isn't just lost?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'be56351e000a0f08562820fb04f6fdbe34d9e655', 'files': [{'path': 'metagpt/provider/openai_api.py', 'Loc': {\"('OpenAIGPTAPI', '_achat_completion_stream', 150)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/provider/openai_api.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "fd7feb57fac8d37509b1325cad502d2f65d59956", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1553", "iss_label": "inactive", "title": "ValueError: Creator not registered for key: LLMType.OLLAMA", "body": "**Bug description**\r\n<!-- Clearly and directly describe the current bug -->\r\nI using ***MetaGPT ver 0.8.1*** but when use RAG with method **SimpleEngine.from_docs** have error ***ValueError: Creator not registered for key: LLMType.OLLAMA***\r\n\r\n<!-- **Bug solved method** -->\r\n<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of your Pull Request. -->\r\n<!-- If not, provide more auxiliary information to facilitate our further positioning and investigation -->\r\n\r\n**Environment information**\r\n<!-- Environment\uff1aSystem version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->\r\n\r\n- LLM type and model name: ollama and model: hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\r\n- System version:\r\n- Python version: 3.10\r\n- MetaGPT version or branch: 0.8.1\r\n\r\n<!-- Dependent packagess\uff1athe packages version cause the bug(like `pydantic 1.10.8`), installation method\uff08like `pip install metagpt` or `pip install from source` or `run in docker`\uff09 -->\r\n\r\n- packages version:\r\n- installation method: \r\n\r\n**Screenshots or logs**\r\n<!-- Screenshots or logs of the bug can help us understand the problem more quickly -->\r\n***config2.yaml***\r\nembedding:\r\n api_type: \"ollama\"\r\n model: \"hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\"\r\n base_url: \"http://127.0.0.1:11434/api\"\r\n\r\nllm:\r\n api_type: \"ollama\"\r\n model: \"hf.co/hugging-quants/Llama-3.2-3B-Instruct-Q8_0-GGUF\"\r\n base_url: \"http://127.0.0.1:11434/api\"\r\n\r\n***Error Response***\r\n[/usr/local/lib/python3.10/dist-packages/metagpt/rag/factories/base.py](https://localhost:8080/#) in get_instance(self, key, **kwargs)\r\n 27 return creator(**kwargs)\r\n 28 \r\n---> 29 raise ValueError(f\"Creator not registered for key: {key}\")\r\n 30 \r\n 31 \r\n\r\nValueError: Creator not registered for key: LLMType.OLLAMA\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": "config/config2.yaml", "Loc": [28]}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "df8d1124c68b62bb98c71b6071abf5efe6293dba", "iss_html_url": 
"https://github.com/geekan/MetaGPT/issues/15", "iss_label": "", "title": "\u8bf7\u95ee\u5982\u4f55\u914d\u7f6e\u4f7f\u7528Azure\u4e0a\u7684api\uff1f", "body": "\u4f60\u597d\uff0c \r\n\u6211\u770b\u5230\u6587\u6863\u4e2d\u9700\u8981\u914d\u7f6eopenAI\u7684key\uff0c\u4f46\u662f\u6211\u6ce8\u610f\u5230\u5728provider\u4e2d\u6709azure_api\u7684\u76f8\u5173\u6587\u4ef6,\r\n\u8bf7\u95ee\u662f\u5426\u5728\u54ea\u4e2a\u5730\u65b9\u53ef\u4ee5\u914d\u7f6e\u8ba9\u4ed6\u4f7f\u7528azure\u63d0\u4f9b\u7684\u670d\u52a1\uff1f", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'df8d1124c68b62bb98c71b6071abf5efe6293dba', 'files': [{'path': 'config/config.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config.yaml"], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "dfa33fcdaade1e4f8019835bf065d372d76724ae", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/924", "iss_label": "", "title": "GLM4\u4e00\u76f4\u62a5\u9519", "body": "2024-02-22 16:50:26.666 | ERROR | metagpt.utils.common:log_it:476 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 80.109(s), this was the 5th time calling it. exp: 1 validation error for PM_NODE_AN\r\n Value error, Missing fields: {'Full API spec', 'Required Python packages', 'Required Other language third-party packages'} [type=value_error, input_value={'Required JavaScript pac...ation and development.'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.5/v/value_error", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dfa33fcdaade1e4f8019835bf065d372d76724ae', 'files': [{'path': 'config/config2.yaml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["config/config2.yaml"], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "80a189ad4a1546f8c1a9dbe00c42725868c35e5e", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/135", "iss_label": "", "title": "failed to launch chromium browser process errors", "body": "get errors on launch of browser process; below is the error from terminal which happens for all browser processes trying to launch.\r\n\r\n```\r\nINFO | metagpt.utils.mermaid:mermaid_to_file:38 - Generating /Users/lopezdp/DevOps/Ai_MetaGPT/workspace/test_app/resources/competitive_analysis.pdf..\r\n\r\nError: Failed to launch the browser process! 
spawn /usr/bin/chromium ENOENT\r\n\r\n\r\nTROUBLESHOOTING: https://pptr.dev/troubleshooting\r\n\r\n at ChildProcess.onClose (file:///Users/lopezdp/DevOps/Ai_MetaGPT/node_modules/@puppeteer/browsers/lib/esm/launch.js:253:24)\r\n at ChildProcess.emit (node:events:513:28)\r\n at Process.ChildProcess._handle.onexit (node:internal/child_process:291:12)\r\n at onErrorNT (node:internal/child_process:485:16)\r\n at processTicksAndRejections (node:internal/process/task_queues:83:21)\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '80a189ad4a1546f8c1a9dbe00c42725868c35e5e', 'files': [{'path': 'config/puppeteer-config.json', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": ["config/puppeteer-config.json"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/1115", "iss_label": "", "title": "The following error appears on every run", "body": "![image](https://github.com/geekan/MetaGPT/assets/115678682/1fb58e0b-47a7-4e1f-a7b7-924ea9adedb0)\r\n\r\n2024-03-27 11:15:59.019 | ERROR | metagpt.utils.common:wrapper:631 - Exception occurs, start to serialize the project, exp:\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n rsp = await self.react()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 626, in wrapper\r\n result = await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\team.py\", line 134, in run\r\n await self.env.run()\r\nException: Traceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 382, in __call__\r\n result = fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\repair_llm_raw_output.py\", line 296, in 
retry_parse_json_text\r\n parsed_data = CustomDecoder(strict=False).decode(output)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 297, in decode\r\n return super().decode(s)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 65, in scan_once\r\n return _scan_once(string, idx)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 36, in _scan_once\r\n return parse_object((string, idx + 1), strict, _scan_once, object_hook, object_pairs_hook, memo)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 164, in JSONObject\r\n value, end = scan_once(s, end)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 34, in _scan_once\r\n return parse_string(string, idx + 1, strict, delimiter=nextchar)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\custom_decoder.py\", line 227, in py_scanstring\r\n raise JSONDecodeError(\"Unterminated string starting at\", s, begin)\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 13 column 25 (char 3485)\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 50, in __call__\r\n result = await fn(*args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 425, in _aask_v1\r\n parsed_data = llm_output_postprocess(\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\llm_output_postprocess.py\", line 19, in llm_output_postprocess\r\n result = postprocess_plugin.run(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 68, in run\r\n new_output = self.run_repair_llm_output(output=output, schema=schema, req_key=req_key)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 32, in run_repair_llm_output\r\n parsed_data = self.run_retry_parse_json_text(content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\provider\\postprocess\\base_postprocess_plugin.py\", line 47, in run_retry_parse_json_text\r\n parsed_data = retry_parse_json_text(output=content) # should use output=content\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 289, in wrapped_f\r\n return self(f, *args, **kw)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 379, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31d30 state=finished raised JSONDecodeError>]\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\utils\\common.py\", line 640, in wrapper\r\n return await func(self, *args, **kwargs)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 550, in run\r\n 
rsp = await self.react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 517, in react\r\n rsp = await self._react()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 463, in _react\r\n rsp = await self._act()\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\roles\\role.py\", line 392, in _act\r\n response = await self.rc.todo.run(self.rc.history)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 58, in run\r\n doc = await self._update_system_design(filename=filename)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 86, in _update_system_design\r\n system_design = await self._new_system_design(context=prd.content)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\design_api.py\", line 73, in _new_system_design\r\n node = await DESIGN_API_NODE.fill(context=context, llm=self.llm)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 505, in fill\r\n return await self.simple_fill(schema=schema, mode=mode, images=images, timeout=timeout, exclude=exclude)\r\n File \"d:\\\u4e0b\u8f7d\\metagpt-main\\metagpt\\actions\\action_node.py\", line 457, in simple_fill\r\n content, scontent = await self._aask_v1(\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 88, in async_wrapped\r\n return await fn(*args, **kwargs)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\_asyncio.py\", line 47, in __call__\r\n do = self.iter(retry_state=retry_state)\r\n File \"D:\\andconda\\envs\\metagpt\\lib\\site-packages\\tenacity\\__init__.py\", line 326, in iter\r\n raise retry_exc from fut.exception()\r\ntenacity.RetryError: RetryError[<Future at 0x1f1a7f31160 state=finished raised RetryError>]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8d98ce34e54eb6250f1f2cf60f5d4dd66d462a5d', 'files': [{'path': 'metagpt/strategy/planner.py', 'Loc': {\"('Planner', 'update_plan', 68)\": {'mod': [75]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["metagpt/strategy/planner.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "bdf9d224b5a05228897553a29214adc074fbc465", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/754", "iss_label": "", "title": "SubscriptionRunner", "body": "import asyncio\r\nfrom metagpt.subscription import SubscriptionRunner\r\nfrom metagpt.roles import Searcher\r\nfrom metagpt.schema import Message\r\n\r\nasync def trigger():\r\n while True:\r\n yield Message(\"the latest news about OpenAI\")\r\n await asyncio.sleep(1)\r\n\r\n\r\nasync def callback(msg: Message):\r\n print(msg.content)\r\n\r\n\r\n# async def main():\r\n# aa = trigger()\r\n# async for i in aa:\r\n# await callback(i)\r\nasync def main():\r\n pd = SubscriptionRunner()\r\n await pd.subscribe(Searcher(), trigger(), callback)\r\n await pd.run()\r\n\r\nasyncio.run(main())\r\n\u5728\u521b\u5efaRunner\u65f6\u5019\u62a5\u9519\uff0c0.6.3\u7248\u672c\r\nTraceback (most recent call last):\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 44, in <module>\r\n asyncio.run(main())\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 190, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File 
\"C:\\Users\\uweih034\\.conda\\envs\\mp\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\asyncio\\base_events.py\", line 653, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"e:\\tmp\\metatest\\OSSWatcher .py\", line 40, in main\r\n pd = SubscriptionRunner()\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\main.py\", line 164, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\888888\\.conda\\envs\\mp\\Lib\\site-packages\\pydantic\\_internal\\_mock_val_ser.py\", line 47, in __getattr__\r\n raise PydanticUserError(self._error_message, code=self._code)\r\npydantic.errors.PydanticUserError: `SubscriptionRunner` is not fully defined; you should define `Environment`, then call `SubscriptionRunner.model_rebuild()`.\r\n\r\nFor further information visit https://errors.pydantic.dev/2.5/u/class-not-fully-defined", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'bdf9d224b5a05228897553a29214adc074fbc465', 'files': [{'path': 'metagpt/environment.py', 'Loc': {\"('Environment', None, 27)\": {'mod': []}}, 'status': 'modified'}, {'Loc': [21], 'path': None}]}", "own_code_loc": [{"Loc": [21], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null, "metagpt/environment.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "geekan", "repo_name": "MetaGPT", "base_commit": "f88fa9e2df09c28f867bda54ec24fa25b50be830", "iss_html_url": "https://github.com/geekan/MetaGPT/issues/178", "iss_label": "", "title": "Specify Directory of pdf documents as Knowledge Base", "body": "Hi, how can we specify any folder which includes pdf documents as a knowledge base and create a new Role of Document Controller to extract specific information from within the documents in KB?\r\n\r\nAny help would be highly appreciated\r\n\r\nThanks much appreciated", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f88fa9e2df09c28f867bda54ec24fa25b50be830', 'files': [{'path': 'metagpt/document_store', 'Loc': {}}, {'path': 'tests/metagpt/document_store', 'Loc': {}}, {'path': 'examples/search_kb.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/search_kb.py"], "doc": ["metagpt/document_store", "tests/metagpt/document_store"], "test": [], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7e756b9db56677636e6920c1e6628d13e980aec7", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/6006", "iss_label": "bug", "title": "All custom components throw errors after update to latest version", "body": "### Bug Description\n\n```\n[01/29/25 00:15:00] ERROR 2025-01-29 00:15:00 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n <class 'pydantic._internal._model_construction.ModelMetaclass'> \n``` \n\n### Reproduction\n\n1. 
langflow updated to v1.1.2 from v1.1.1\n2. all previously created custom components throwing error:\n\n[01/29/25 00:24:09] ERROR 2025-01-29 00:24:09 - ERROR - chat - Error building vertices: Error serializing vertex build response: Unable to serialize unknown type: chat.py:405\n <class 'pydantic._internal._model_construction.ModelMetaclass'> \n\n### Expected behavior\n\nLangflow should build tool correctly, as on previous version. \n\nSimplified failing code:\n```python\nfrom langflow.custom import Component\nfrom langflow.io import Output\nfrom langflow.schema import Data\nfrom langflow.field_typing import Tool\nfrom langchain.tools import StructuredTool\nfrom pydantic import BaseModel, Field\n\nclass MinimalSchema(BaseModel):\n input_text: str = Field(..., description=\"Text Input\")\n\nclass SimpleToolComponentMinimalSchema(Component):\n display_name = \"Simple Tool Minimal Schema Test\"\n description = \"Component with StructuredTool and minimal schema\"\n outputs = [Output(display_name=\"Tool\", name=\"test_tool\", method=\"build_tool\")]\n\n class MinimalSchema(BaseModel): # Define inner schema\n input_text: str = Field(..., description=\"Text Input\")\n\n def build_tool(self) -> Tool:\n return StructuredTool.from_function( # Return directly - simplified\n name=\"minimal_tool\",\n description=\"Minimal tool for testing schema\",\n func=self.run_tool,\n args_schema=SimpleToolComponentMinimalSchema.MinimalSchema\n )\n\n def run_tool(self, input_text: str) -> str:\n return f\"Tool received: {input_text}\"\n``` \n\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nwsl Ubuntu latest\n\n### Langflow Version\n\n1.1.2\n\n### Python Version\n\n3.12\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [40], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "19818db68b507332be71f30dd90d16bf4c7d6f83", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3718", "iss_label": "enhancement", "title": "Add pgVector in the building instructions for the PostgreSQL Docker image", "body": "### Feature Request\r\n\r\nInclude the pgVector component with the Docker build instructions. This would provide the use with a fully functional PostgreSQL Vector DB, ready to be used inside LangFlow.\r\n\r\n### Motivation\r\n\r\nI am not a programmer, neither I do have proper knowledge of SQL, but I liked to play with some RAG ideas and LangFlow seems perfect. \r\nSo, after installing the Docker version for development of LangFlow, I noticed that the PostgreSQL server is missing the pgVector component, or at least that is what I understood from the error messages. \r\nPerhaps, it would be useful if the pgVector could be included in the Docker container, so having the user to just activate it on the SQL database. 
Anyway, I might be wrong, so in that case please forgive me.\r\n\r\n### Your Contribution\r\n\r\nAfter looking into the repository and searching around, with the help of AI (of course!), I found that the Docker instructions for the PostgreSQL server are defined inside the file \\docker\\cdk.Dockerfile (hope it's correct), and these might be the instructions to include pgVector:\r\n\r\n```\r\nFROM --platform=linux/amd64 python:3.10-slim\r\n\r\nWORKDIR /app\r\n\r\n# Install Poetry and build dependencies\r\nRUN apt-get update && apt-get install -y \\\r\n gcc \\\r\n g++ \\\r\n curl \\\r\n build-essential \\\r\n git \\\r\n postgresql-server-dev-all \\\r\n && rm -rf /var/lib/apt/lists/*\r\n\r\n# Install Poetry\r\nRUN curl -sSL https://install.python-poetry.org | python3 -\r\n\r\n# Add Poetry to PATH\r\nENV PATH=\"${PATH}:/root/.local/bin\"\r\n\r\n# Copy the pyproject.toml and poetry.lock files\r\nCOPY poetry.lock pyproject.toml ./\r\n\r\n# Copy the rest of the application codes\r\nCOPY ./ ./\r\n\r\n# Install dependencies\r\nRUN poetry config virtualenvs.create false && poetry install --no-interaction --no-ansi\r\n\r\n# Install pgvector extension\r\nRUN git clone https://github.com/pgvector/pgvector.git /tmp/pgvector && \\\r\n cd /tmp/pgvector && \\\r\n make && \\\r\n make install && \\\r\n rm -rf /tmp/pgvector\r\n\r\n# Install additional dependencies\r\nRUN poetry add botocore\r\nRUN poetry add pymysql\r\n\r\n# Command to run your application\r\nCMD [\"sh\", \"./container-cmd-cdk.sh\"]\r\n```\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '19818db68b507332be71f30dd90d16bf4c7d6f83', 'files': [{'path': 'docker_example/docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nor\n4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config\nCode"}, "loctype": {"code": [], "doc": ["docker_example/docker-compose.yml"], "test": [], "config": [], "asset": []}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "2ddd7735129b0f35fd617f2634d35a3690b06630", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/4528", "iss_label": "bug", "title": "Can't access flow directly by link", "body": "### Bug Description\n\nWhen you try to access a flow using it's URL (ex. http://localhost:55650/flow/0b95342f-6ce4-43d0-9d60-c28bf66a3781), the page doesn't load and in the browser's console is shown the following message: ``Uncaught SyntaxError: Unexpected token '<' (at index-DK9323ab.js:1:1)``. I think that this problem is related to #1182 .\r\n\r\nNavegating through the main page to access this flow works fine. If I reload the page, it doesn't load as described before.\n\n### Reproduction\n\n1. Run the Docker image langflowui/langflow\r\n2. Open the langflow main page\r\n3. Creates a new flow\r\n4. 
Copy the flow link into a new tab or just reload the page\n\n### Expected behavior\n\nTo open the flow editor page.\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nDocker image (langflowai/langflow) running in K8s\n\n### Langflow Version\n\n1.0.19\n\n### Python Version\n\nNone\n\n### Screenshot\n\nInstead of loading the JS file, is loaded the HTML file as shown in the following picture:\r\n\r\n![image](https://github.com/user-attachments/assets/4a192a11-5389-497f-8898-fd0598684ca1)\r\n\r\nAll requests in this image loads the same HTML.\r\n\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2ddd7735129b0f35fd617f2634d35a3690b06630', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "ed53fcd3b042ecb5ed04c9c4562c459476bd6763", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/3896", "iss_label": "bug", "title": "redis.exceptions.ResponseError: unknown command 'module'", "body": "### Bug Description\n\nredis.exceptions.ResponseError: unknown command 'module'\r\n\r\nhttps://github.com/user-attachments/assets/32ea6046-d5f1-4d85-96b5-41d381776986\r\n\r\n\n\n### Reproduction\n\nAdd a redis click run error, see the video\n\n### Expected behavior\n\nResponseError: unknown command 'MODULE'\n\n### Who can help?\n\n_No response_\n\n### Operating System\n\nwindows \n\n### Langflow Version\n\n1.0.18\n\n### Python Version\n\n3.11\n\n### Screenshot\n\n_No response_\n\n### Flow File\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ed53fcd3b042ecb5ed04c9c4562c459476bd6763', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "7d400903644230a8842ce189ca904ea9f8048b07", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/1239", "iss_label": "bug", "title": "cannot import name 'DEFAULT_CONNECTION_STRING' in v0.6.3a5", "body": "\r\n\r\n```\r\n% git branch\r\n* (HEAD detached at v0.6.3a5)\r\n dev\r\n% cd docker_example \r\n% docker compose up\r\n\r\n[+] Running 1/0\r\n \u2714 Container docker_example-langflow-1 Created 0.0s \r\nAttaching to langflow-1\r\nlangflow-1 | Traceback (most recent call last):\r\nlangflow-1 | File \"/home/user/.local/bin/langflow\", line 5, in <module>\r\nlangflow-1 | from langflow.__main__ import main\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/__init__.py\", line 5, in <module>\r\nlangflow-1 | from langflow.processing.process import load_flow_from_json\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/processing/process.py\", line 10, in <module>\r\nlangflow-1 | from langflow.graph import Graph\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/__init__.py\", line 2, in <module>\r\nlangflow-1 | from 
langflow.graph.graph.base import Graph\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/graph/base.py\", line 7, in <module>\r\nlangflow-1 | from langflow.graph.graph.constants import lazy_load_vertex_dict\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/graph/constants.py\", line 1, in <module>\r\nlangflow-1 | from langflow.graph.vertex import types\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/vertex/types.py\", line 5, in <module>\r\nlangflow-1 | from langflow.graph.vertex.base import Vertex\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/graph/vertex/base.py\", line 9, in <module>\r\nlangflow-1 | from langflow.interface.initialize import loading\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/initialize/loading.py\", line 17, in <module>\r\nlangflow-1 | from langflow.interface.custom_lists import CUSTOM_NODES\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/custom_lists.py\", line 7, in <module>\r\nlangflow-1 | from langflow.interface.agents.custom import CUSTOM_AGENTS\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/agents/__init__.py\", line 1, in <module>\r\nlangflow-1 | from langflow.interface.agents.base import AgentCreator\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/interface/agents/base.py\", line 5, in <module>\r\nlangflow-1 | from langflow.custom.customs import get_custom_nodes\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/custom/customs.py\", line 1, in <module>\r\nlangflow-1 | from langflow.template import frontend_node\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/template/frontend_node/__init__.py\", line 1, in <module>\r\nlangflow-1 | from langflow.template.frontend_node import (\r\nlangflow-1 | File \"/home/user/.local/lib/python3.10/site-packages/langflow/template/frontend_node/memories.py\", line 7, in <module>\r\nlangflow-1 | from langchain.memory.chat_message_histories.postgres import DEFAULT_CONNECTION_STRING\r\nlangflow-1 | ImportError: cannot import name 'DEFAULT_CONNECTION_STRING' from 'langchain.memory.chat_message_histories.postgres' (/home/user/.local/lib/python3.10/site-packages/langchain/memory/chat_message_histories/postgres.py)\r\nlangflow-1 exited with code 1\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7d400903644230a8842ce189ca904ea9f8048b07', 'files': [{'path': 'Version', 'Loc': {}}, {'path': 'Version', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["Version"]}} +{"organization": "langflow-ai", "repo_name": "langflow", "base_commit": "12a46b6936e23829d9956d4d5f1fa51faff76137", "iss_html_url": "https://github.com/langflow-ai/langflow/issues/965", "iss_label": "stale", "title": "Method for Dynamically Manipulating Parameters of a Custom Component", "body": "```python\r\nclass DynamicConfigCustomComponent(CustomComponent):\r\n def build_config(self, prev_selection=None):\r\n config = {\r\n \"param1\": {\"display_name\": \"Parameter 1\"},\r\n \"param2\": {\r\n \"display_name\": \"Parameter 2\",\r\n \"options\": [1, 2, 3],\r\n 
\"value\": 1,\r\n },\r\n }\r\n \r\n if prev_selection is not None:\r\n if prev_selection == 2:\r\n config[\"param3\"] = {\"display_name\": \"Parameter 3\", \"value\": \"hello\"}\r\n \r\n return config\r\n\r\n``` \r\nI want to dynamically change different values depending on the type of component that is input or connected when using a custom component, as shown in the attached code. For example, in Langflow's prompt template, when you change the text, the key value input into that component is dynamically displayed in the list. Is there any way to do this?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '12a46b6936e23829d9956d4d5f1fa51faff76137', 'files': [{'path': 'src/frontend/src/types/components/index.ts', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["src/frontend/src/types/components/index.ts"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "ad7cefa10c0647feee85114d58559fcf83ba6743", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/1902", "iss_label": "setup", "title": "Error with 'python -m autogpt' command. Please set your OpenAI API key in .env or as an environment variable. You can get your key from https://beta.openai.com/account/api-keys", "body": "### Duplicates\n\n- [X] I have searched the existing issues\n\n### Steps to reproduce \ud83d\udd79\n\nInstalled the 'stable' version of the program\r\nI run 'python -m autogpt' command and comes up with an error.\r\n\r\n\r\n![Screenshot 2023-04-16 183147](https://user-images.githubusercontent.com/130889399/232320050-2b495403-55e9-4d43-b588-e53172eba533.jpg)\r\n\r\nI have paid Chat GPT and Open AI API accounts.\r\nFor Chat GPT I have access to version 4\r\nFor Open AI API I do not have access to version 4, I am on the version before this.\n\n### Current behavior \ud83d\ude2f\n\nError message ;Please set your OpenAI API key in .env or as an environment variable.\r\nYou can get your key from https://beta.openai.com/account/api-keys'\n\n### Expected behavior \ud83e\udd14\n\nShould load the program as to start commands\n\n### Your prompt \ud83d\udcdd\n\n```yaml\r\npython -m autogpt```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ad7cefa10c0647feee85114d58559fcf83ba6743', 'files': [{'path': 'run.sh', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1\n0", "info_type": "Other\n\u73af\u5883\u53d8\u91cf /script shell\u7b49"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["run.sh"]}} +{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "90e6a55e378bc80352f01eb08122300b4d1a64ec", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/2428", "iss_label": "function: logging", "title": "Add logging of user input of the role and goals", "body": "### Duplicates\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Summary \ud83d\udca1\r\n\r\nNow logs reflect only gpt's response but i don't really remember what exactly i input before. Please log it same as in the console. 
\r\nCurrent logging makes it a lot harder to debug\r\n\r\n### Examples \ud83c\udf08\r\n```\r\nAll packages are installed.\r\nWelcome back! Would you like me to return to being sc3?\r\nContinue with the last settings?\r\nName: sc3\r\nRole: warhammer 40k writer\r\nGoals: ['research the theme', 'do a 5000 symbols structurized explanation on wh40k lore', 'terminate']\r\nContinue (y/n): n\r\nWelcome to Auto-GPT! run with '--help' for more information.\r\nCreate an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.\r\nName your AI: For example, 'Entrepreneur-GPT'\r\nAI Name: da23eads\r\nda23eads here! I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\nda23eads is: wh 40k writer\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\nEnter nothing to load defaults, enter nothing when finished.\r\nGoal 1: research the theme\r\nGoal 2: do a plot esplanation on warhammer 40k universe\r\nGoal 3: terminate\r\nGoal 4:\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n- Thinking...\r\n```\r\n\r\n### Motivation \ud83d\udd26\r\n\r\nmake the world better", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["ai_settings.yml"], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["ai_settings.yml"], "asset": []}} +{"organization": "Significant-Gravitas", "repo_name": "AutoGPT", "base_commit": "16b7e7a91e7b6c73ddf3e7193cea53f1b45671fa", "iss_html_url": "https://github.com/Significant-Gravitas/AutoGPT/issues/4218", "iss_label": "setup", "title": "AutoGPT v0.3.1 crashes immediately after task given", "body": "### Which Operating System are you using?\r\n\r\nWindows\r\n\r\n### Which version of Auto-GPT are you using?\r\n\r\nLatest Release v0.3.1\r\n\r\n### GPT-3 or GPT-4?\r\n\r\nGPT-3.5\r\n\r\n### Steps to reproduce \ud83d\udd79\r\n\r\nWelcome to Auto-GPT! run with '--help' for more information.\r\nCreate an AI-Assistant: input '--manual' to enter manual mode.\r\n Asking user via keyboard...\r\nI want Auto-GPT to: Search Big Mac prices in EU countries\r\nUnable to automatically generate AI Config based on user desire. Falling back to manual mode.\r\nCreate an AI-Assistant: Enter the name of your AI and its role below. Entering nothing will load defaults.\r\nName your AI: For example, 'Entrepreneur-GPT'\r\n Asking user via keyboard...\r\nAI Name: MacGPT\r\nMacGPT here! 
I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\n Asking user via keyboard...\r\nMacGPT is: Search for Big Mc prices in EU countries\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\n Enter nothing to load defaults, enter nothing when finished.\r\n Asking user via keyboard...\r\nGoal 1: Conduct a thorough and accurate search of BigMc prices across EU countries\r\n Asking user via keyboard...\r\nGoal 2: Provide price per each EU capital\r\n Asking user via keyboard...\r\nGoal 3: Ensure that the information provided is up-to-date and accurate\r\n Asking user via keyboard...\r\nGoal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n Asking user via keyboard...\r\nGoal 5: Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nEnter your budget for API calls: For example: $1.50\r\n Enter nothing to let the AI run without monetary limit\r\n Asking user via keyboard...\r\nBudget: $1\r\nMacGPT has been created with the following details:\r\nName: MacGPT\r\nRole: Search for Big Mc prices in EU countries\r\nGoals:\r\n- Conduct a thorough and accurate search of BigMc prices across EU countries\r\n- Provide price per each EU capital\r\n- Ensure that the information provided is up-to-date and accurate\r\n- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n- Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\makkolev\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\agpt\\autogpt\\__main__.py\", line 5, in <module>\r\n autogpt.cli.main()\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1635, in invoke\r\n rv = super().invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"C:\\agpt\\autogpt\\cli.py\", line 90, in main\r\n run_auto_gpt(\r\n File \"C:\\agpt\\autogpt\\main.py\", line 186, in run_auto_gpt\r\n agent.start_interaction_loop()\r\n File \"C:\\agpt\\autogpt\\agent\\agent.py\", line 113, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n File \"C:\\agpt\\autogpt\\llm\\chat.py\", line 244, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\llm_utils.py\", line 166, in create_chat_completion\r\n response = 
api_manager.create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\api_manager.py\", line 55, in create_chat_completion\r\n response = openai.ChatCompletion.create\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\chat_completion.py\", line 25, in create\r\n return super().create(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\abstract\\engine_api_resource.py\", line 153, in create\r\n response, _, api_key = requestor.request(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 226, in request\r\n resp, got_stream = self._interpret_response(result, stream)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 619, in _interpret_response\r\n self._interpret_response_line(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 682, in _interpret_response_line\r\n raise self.handle_error_response(\r\nopenai.error.AuthenticationError: <empty message>\r\n\r\n### Current behavior \ud83d\ude2f\r\n\r\nCrashes multiple times. Open_API_key has been provided. Restarted virtual environment a couple of times.\r\nNB! Tried to start AutoGPT both with Windows Python3.10 way and via Docker. In both cases can't start start search and receive immediately error (below) - openai.error.AuthenticationError: <empty message>\r\n\r\n### Expected behavior \ud83e\udd14\r\n\r\nStarts correctly\r\n\r\n### Your prompt \ud83d\udcdd\r\n\r\n```AI Name: MacGPT\r\nMacGPT here! I am at your service.\r\nDescribe your AI's role: For example, 'an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.'\r\n Asking user via keyboard...\r\nMacGPT is: Search for Big Mc prices in EU countries\r\nEnter up to 5 goals for your AI: For example: Increase net worth, Grow Twitter Account, Develop and manage multiple businesses autonomously'\r\n Enter nothing to load defaults, enter nothing when finished.\r\n Asking user via keyboard...\r\nGoal 1: Conduct a thorough and accurate search of BigMc prices across EU countries\r\n Asking user via keyboard...\r\nGoal 2: Provide price per each EU capital\r\n Asking user via keyboard...\r\nGoal 3: Ensure that the information provided is up-to-date and accurate\r\n Asking user via keyboard...\r\nGoal 4: Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n Asking user via keyboard...\r\nGoal 5: Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nEnter your budget for API calls: For example: $1.50\r\n Enter nothing to let the AI run without monetary limit\r\n Asking user via keyboard...\r\nBudget: $1\r\nMacGPT has been created with the following details:\r\nName: MacGPT\r\nRole: Search for Big Mc prices in EU countries\r\nGoals:\r\n- Conduct a thorough and accurate search of BigMc prices across EU countries\r\n- Provide price per each EU capital\r\n- Ensure that the information provided is up-to-date and accurate\r\n- Continuously improve the search algorithm to increase the accuracy and efficiency of the search process.\r\n- Do not crash ang give error - \"openai.error.AuthenticationError: <empty message>\"\r\nUsing memory of type: LocalCache\r\nUsing Browser: chrome\r\n```\r\n\r\n\r\n### Your Logs \ud83d\udcd2\r\n\r\n```log\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\user\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 196, in _run_module_as_main\r\n 
return _run_code(code, main_globals, None,\r\n File \"C:\\Users\\makkolev\\AppData\\Local\\Programs\\Python\\Python310\\lib\\runpy.py\", line 86, in _run_code\r\n exec(code, run_globals)\r\n File \"C:\\agpt\\autogpt\\__main__.py\", line 5, in <module>\r\n autogpt.cli.main()\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1130, in __call__\r\n return self.main(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1055, in main\r\n rv = self.invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1635, in invoke\r\n rv = super().invoke(ctx)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 1404, in invoke\r\n return ctx.invoke(self.callback, **ctx.params)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\core.py\", line 760, in invoke\r\n return __callback(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\click\\decorators.py\", line 26, in new_func\r\n return f(get_current_context(), *args, **kwargs)\r\n File \"C:\\agpt\\autogpt\\cli.py\", line 90, in main\r\n run_auto_gpt(\r\n File \"C:\\agpt\\autogpt\\main.py\", line 186, in run_auto_gpt\r\n agent.start_interaction_loop()\r\n File \"C:\\agpt\\autogpt\\agent\\agent.py\", line 113, in start_interaction_loop\r\n assistant_reply = chat_with_ai(\r\n File \"C:\\agpt\\autogpt\\llm\\chat.py\", line 244, in chat_with_ai\r\n assistant_reply = create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\llm_utils.py\", line 166, in create_chat_completion\r\n response = api_manager.create_chat_completion(\r\n File \"C:\\agpt\\autogpt\\llm\\api_manager.py\", line 55, in create_chat_completion\r\n response = openai.ChatCompletion.create(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\chat_completion.py\", line 25, in create\r\n return super().create(*args, **kwargs)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_resources\\abstract\\engine_api_resource.py\", line 153, in create\r\n response, _, api_key = requestor.request(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 226, in request\r\n resp, got_stream = self._interpret_response(result, stream)\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 619, in _interpret_response\r\n self._interpret_response_line(\r\n File \"C:\\agpt\\autogpt_env\\lib\\site-packages\\openai\\api_requestor.py\", line 682, in _interpret_response_line\r\n raise self.handle_error_response(\r\nopenai.error.AuthenticationError: <empty message>\r\nPress any key to continue . . 
.\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "iss_html_url": "https://github.com/fastapi/fastapi/issues/5424", "iss_label": "question\nanswered\nquestion-migrate", "title": "How to identify query params with keys only and no value", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@router.get(\"/events\")\r\ndef get_alerts(request: Request)\r\n params = request.query_params\n```\n\n\n### Description\n\nI want to handle a use case where I want to handle a use case where if a query param is passed but no value is set, I would return a specific message. 
I want a different behavior from when it is not passed at all.\r\n\r\nI tried using request.query_params but it doesn't get the Key in the request as well.\r\n\r\nPostman request looks like this:\r\n<img width=\"805\" alt=\"image\" src=\"https://user-images.githubusercontent.com/104721284/192010955-160c2418-63f3-46ac-9f64-a416b92c03ae.png\">\r\n\r\n\r\n\n\n### Operating System\n\nmacOS\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.70.0\n\n### Python Version\n\n3.9\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [20], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "iss_html_url": "https://github.com/fastapi/fastapi/issues/5425", "iss_label": "question\nanswered\nquestion-migrate", "title": "Error while opening swagger docs while uploading file in APIRouter", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nrouter = APIRouter(\r\n prefix='/predict',\r\n tags=[\"Prediction\"],\r\n responses={404: {\"description\": \"Not Found\"}}\r\n)\r\n\r\n\r\n@router.post(\"/\")\r\nasync def predict(file: UploadFile = File(...)):\r\n extension = file.filename.split(\".\")[-1] in (\"jpg\", \"jpeg\", \"png\")\r\n if not extension:\r\n raise HTTPException(status_code=400, detail=\"File Format Error : Uploaded file must be a JPG, JPEG or PNG file\")\r\n image = read_image_file(await file.read())\r\n result = predict_pneumonia(image)\r\n if result > 0.6:\r\n return JSONResponse(content={\"prediction\": \"pneumonia\"})\r\n return JSONResponse(content={\"prediction\": \"no pneumonia\"})\r\n```\r\n\r\n\r\n### Description\r\n\r\nI am just trying to create an ML prediction application using FastAPI. While uploading images, swagger docs doesn't load and it's showing the below mentioned error. 
But the endpoint works perfectly when tried with Postman.\r\n\r\n![image](https://user-images.githubusercontent.com/58306412/192039571-1eed5f98-cd67-49ec-97ec-364b28ace0f9.png)\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 404, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\uvicorn\\middleware\\proxy_headers.py\", line 78, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 270, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\applications.py\", line 124, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 184, in __call__\r\n raise exc\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 75, in __call__\r\n raise exc\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\middleware\\exceptions.py\", line 64, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 21, in __call__\r\n raise e\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\middleware\\asyncexitstack.py\", line 18, in __call__\r\n await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 680, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 275, in handle\r\n await self.app(scope, receive, send)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\starlette\\routing.py\", line 65, in app\r\n response = await func(request)\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 225, in openapi\r\n return JSONResponse(self.openapi())\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\applications.py\", line 200, in openapi\r\n self.openapi_schema = get_openapi(\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\openapi\\utils.py\", line 423, in get_openapi\r\n definitions = get_model_definitions(\r\n File \"D:\\Programming_Languages\\Anaconda\\envs\\Medaignostic-Playground\\lib\\site-packages\\fastapi\\utils.py\", line 39, in get_model_definitions\r\n model_name = model_name_map[model]\r\nKeyError: <class 
'pydantic.main.Body_predict_predict__post'>\r\n```\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.85.0\r\n\r\n### Python Version\r\n\r\n3.9\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c6aa28bea2f751a91078bd8d845133ff83f352bf', 'files': [{'path': 'fastapi/routing.py', 'Loc': {\"('APIRouter', 'add_api_route', 513)\": {'mod': [593]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "c6aa28bea2f751a91078bd8d845133ff83f352bf", "iss_html_url": "https://github.com/fastapi/fastapi/issues/5422", "iss_label": "question\nquestion-migrate", "title": "Unidirectional websocket connections where only the server pushes data to the clients", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\n@app.websocket(\"/ws\")\r\nasync def websocket_endpoint(websocket: WebSocket):\r\n await websocket.accept()\r\n while True:\r\n data = await websocket.receive_text()\r\n await websocket.send_text(f\"Message text was: {data}\")\n```\n\n\n### Description\n\nHello,\r\nIs there a way I could send data to clients over websocket without listening for when clients send data back. I'm trying to have a websocket endpoint where the server is pushing data to the client in a unidirectional way without the option for the client to send responses back. There doesn't seem to be any code that I could find that supports this since all the documentation seems to require that the server is listening for a `websocket.recieve_text()`. 
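A minimal sketch of the push-only pattern being asked about (the one-second interval and the 'tick' payload are placeholders, not from this report):\r\n\r\n```python\r\nimport asyncio\r\n\r\nfrom fastapi import FastAPI, WebSocket\r\nfrom starlette.websockets import WebSocketDisconnect\r\n\r\napp = FastAPI()\r\n\r\n@app.websocket('/ws')\r\nasync def push_only(websocket: WebSocket):\r\n    await websocket.accept()\r\n    try:\r\n        while True:\r\n            # Push on the server's own schedule; never await receive_text().\r\n            await websocket.send_text('tick')\r\n            await asyncio.sleep(1)\r\n    except (WebSocketDisconnect, RuntimeError):\r\n        # Depending on the Starlette version, sending on a closed socket\r\n        # raises one of these; stop pushing once the client is gone.\r\n        pass\r\n```\r\n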
Any help would be much appreciated, thanks.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.81.0\n\n### Python Version\n\n3.8.13\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [23], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "55afb70b3717969565499f5dcaef54b1f0acc7da", "iss_html_url": "https://github.com/fastapi/fastapi/issues/891", "iss_label": "question\nanswered\nquestion-migrate", "title": "SQL related tables and corresponding nested pydantic models in async", "body": "Really impressed with FastAPI so far... I have search docs github, tickets and googled the issue described below.\r\n\r\n### Description\r\n\r\nHow best to work with related tables and corresponding nested pydantic models whilst persisting data in a relational database in an async application?\r\n\r\n### Additional context\r\n\r\nI have been attempting to extend the example in the docs \r\nhttps://fastapi.tiangolo.com/advanced/async-sql-databases/\r\nwhich relies on https://github.com/encode/databases\r\n\r\nUsing three test pydantic models as an example:\r\n\r\n```\r\nclass UserModel(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: int = Field(...)\r\n\r\nclass FavouriteBook(BaseModel):\r\n id: int\r\n title: str = Field(...)\r\n author: str = Field(...)\r\n\r\n\r\nclass ExtendedUser(BaseModel):\r\n id: int\r\n title: str = Field(..., min_length=2, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n firstname: str = Field(..., min_length=1, max_length=50)\r\n username: str = Field(..., min_length=3, max_length=50)\r\n email: str = Field(..., min_length=3, max_length=50)\r\n favourite_book: FavouriteBook\r\n\r\n```\r\n\r\nthe route would ideally be along the lines of...\r\n\r\n```\r\n@router.get(\"/extended\", response_model=List[ExtendedUser])\r\nasync def list():\r\n query = **sqlAlchemy/databases call that works**\r\n return database.fetch_all(query=query)\r\n\r\n```\r\n\r\n\r\nHow can a user create a route that returns the nested ExtendedUser from the database without resorting to performing two queries? \r\nAn SQL join is a standard way to do this with a single query. However, this does not work with SQLAlchemy core as the two tables contain 'id' and 'title' columns. \r\nIt is possible to work with SQLAlchemy orm - but not in an async way as far as I know. (async is my reason for using FastAPI ). 
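A single-query sketch that sidesteps the name collision by labelling the selected columns at query time (the Table definitions below are assumed for illustration; only the `database.fetch_all` call comes from the snippets above):\r\n\r\n```python\r\nimport sqlalchemy\r\n\r\nmetadata = sqlalchemy.MetaData()\r\n\r\nusers = sqlalchemy.Table(\r\n    'users', metadata,\r\n    sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True),\r\n    sqlalchemy.Column('title', sqlalchemy.String(50)),\r\n    sqlalchemy.Column('favourite_book', sqlalchemy.ForeignKey('books.id')),\r\n)\r\n\r\nbooks = sqlalchemy.Table(\r\n    'books', metadata,\r\n    sqlalchemy.Column('id', sqlalchemy.Integer, primary_key=True),\r\n    sqlalchemy.Column('title', sqlalchemy.String(100)),\r\n    sqlalchemy.Column('author', sqlalchemy.String(100)),\r\n)\r\n\r\n# Explicit labels keep the joined 'id'/'title' columns distinguishable,\r\n# so one fetch_all() can feed the nested ExtendedUser without a second query.\r\nquery = (\r\n    sqlalchemy.select([\r\n        users.c.id.label('user_id'),\r\n        users.c.title.label('user_title'),\r\n        books.c.id.label('book_id'),\r\n        books.c.title.label('book_title'),\r\n        books.c.author.label('book_author'),\r\n    ])\r\n    .select_from(users.join(books))\r\n)\r\n\r\n# rows = await database.fetch_all(query=query)  # inside the route\r\n```\r\n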
I could rename the columns to something unique ( but to rename 'id' column seems like poor database design to me).\r\n\r\n\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [31], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "1760da0efa55585c19835d81afa8ca386036c325", "iss_html_url": "https://github.com/fastapi/fastapi/issues/3882", "iss_label": "question\nquestion-migrate", "title": "Doing work after the HTTP response has been sent", "body": "### First Check\n\n- [X] I added a very descriptive title to this issue.\n- [X] I used the GitHub search to find a similar issue and didn't find it.\n- [X] I searched the FastAPI documentation, with the integrated search.\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\n\n### Commit to Help\n\n- [X] I commit to help with one of those options \ud83d\udc46\n\n### Example Code\n\n```python\nfrom fastapi import FastAPI, Request\r\n\r\napp = FastAPI()\r\n\r\n@app.middleware(\"http\")\r\nasync def write_log(request: Request, call_next):\r\n response = await call_next(request)\r\n # write log\r\n return response\n```\n\n\n### Description\n\nI want to log data for each request, however since my application is latency sensitive, I would want to return as quickly as possible. Is there an equivalent to Symfony's \"[terminate](https://symfony.com/doc/current/reference/events.html#kernel-terminate)\" event (which I guess is the `request_finished` signal in Django)? 
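The closest built-in equivalent appears to be FastAPI's background tasks (fastapi/background.py), which run a callable only after the response has gone out. A minimal per-route sketch, with `write_log` as a stand-in for the real logger:\r\n\r\n```python\r\nfrom fastapi import BackgroundTasks, FastAPI\r\n\r\napp = FastAPI()\r\n\r\ndef write_log(message: str) -> None:\r\n    # Stand-in for the real log writer.\r\n    with open('log.txt', 'a') as log:\r\n        log.write(message)\r\n\r\n@app.get('/')\r\nasync def root(background_tasks: BackgroundTasks):\r\n    # Queued tasks execute after the response has been sent.\r\n    background_tasks.add_task(write_log, 'request handled')\r\n    return {'ok': True}\r\n```\r\n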
The idea is to do the log writing after the HTTP response has been sent.\r\n\r\nThe middleware code above is from the middleware documentation, but it basically means the code for writing the log will be executed before the response is sent.\n\n### Operating System\n\nLinux\n\n### Operating System Details\n\n_No response_\n\n### FastAPI Version\n\n0.65.1\n\n### Python Version\n\n3.8.5\n\n### Additional Context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1760da0efa55585c19835d81afa8ca386036c325', 'files': [{'path': 'fastapi/background.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/background.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "a0e4d38bea74940de013e04a6d6f399d62f04280", "iss_html_url": "https://github.com/fastapi/fastapi/issues/1498", "iss_label": "question\nreviewed\nquestion-migrate", "title": "RedirectResponse from a POST request route to GET request route shows 405 Error code.", "body": "_Summary of the whole issue:_ **How to do a Post/Redirect/Get (PRG) in FastAPI?**\r\n\r\n_This is not necessarily a bug, rather a question._\r\n### Things I tried:\r\nI want to redirect the response from the 2nd route to the 1st route. This [Issue#199](https://github.com/tiangolo/fastapi/issues/199) here explains **GET to GET** but not a **POST to GET**. **N.B:** `I have done this type of POST -> GET redirecting in flask, it was working there but not here.` And also this [Issue#863](https://github.com/tiangolo/fastapi/issues/863) has the same problem but doesn't really solve the problem. 
To reproduce the error, check the bottom.\r\n\r\n```Python3\r\n#1st route (GET request)\r\n@admin_content_edit_router.get('/admin/edit_content/set_category')\r\nasync def set_category(request:Request):\r\n return templates.TemplateResponse(\"admin/category_edit.html\", {'request': request})\r\n\r\n#2nd route (POST request)\r\n@admin_content_edit_router.post('/admin/edit_content/add_category')\r\nasync def add_category(request:Request):\r\n # here forms are getting processed\r\n return RedirectResponse(app.url_path_for('set_category')) # from here to 1st route\r\n```\r\nBut it shows:\r\n```Python3\r\n {\"detail\":\"Method Not Allowed\"}\r\n```\r\nFull traceback:\r\n```Python3\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/add_category HTTP/1.1\" 307 Temporary Redirect\r\nINFO: 127.0.0.1:58415 - \"POST /admin/edit_content/set_category HTTP/1.1\" 405 Method Not Allowed\r\nERROR: Exception in callback _SelectorSocketTransport._read_ready()\r\nhandle: <Handle _SelectorSocketTransport._read_ready()>\r\nTraceback (most recent call last):\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\events.py\", line 145, in _run\r\n self._callback(*self._args)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 730, in _read_ready\r\n self._protocol.data_received(data)\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 162, in data_received\r\n self.handle_events()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\site-packages\\uvicorn\\protocols\\http\\h11_impl.py\", line 247, in handle_events\r\n self.transport.resume_reading()\r\n File \"c:\\users\\aminp\\appdata\\local\\programs\\python\\python36\\lib\\asyncio\\selector_events.py\", line 711, in resume_reading\r\n raise RuntimeError('Not paused')\r\nRuntimeError: Not paused\r\n```\r\n\r\nBut when I do a GET-to-GET redirect response it works without any issue, while a POST to GET blows things up! Am I completely missing something here? I did look in the starlette docs on reverse route lookups, but nothing helps. 
[https://www.starlette.io/routing/#reverse-url-lookups](url)\r\n\r\nQuick reproduction of the error:\r\n```Python3\r\n\r\nfrom fastapi import FastAPI\r\nfrom starlette.responses import RedirectResponse\r\nimport os\r\nfrom starlette.status import HTTP_302_FOUND,HTTP_303_SEE_OTHER\r\n\r\napp = FastAPI()\r\n\r\n@app.post(\"/\")\r\nasync def login():\r\n # HTTP_302_FOUND,HTTP_303_SEE_OTHER : None is working:(\r\n return RedirectResponse(url=\"/ressource/1\",status_code=HTTP_303_SEE_OTHER)\r\n\r\n@app.get(\"/ressource/{r_id}\")\r\nasync def get_ressource(r_id:str):\r\n return {\"r_id\": r_id}\r\n\r\nif __name__ == '__main__':\r\n os.system(\"uvicorn tes:app --host 0.0.0.0 --port 80\")\r\n```\r\n\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'a0e4d38bea74940de013e04a6d6f399d62f04280', 'files': [{'Loc': [58], 'path': None}]}", "own_code_loc": [{"Loc": [58], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "b93f8a709ab3923d1268dbc845f41985c0302b33", "iss_html_url": "https://github.com/fastapi/fastapi/issues/4551", "iss_label": "question\nquestion-migrate", "title": "Attribute not found while testing a Beanie Model inside fast api", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nMy Code:\r\n\r\n\r\nMy Route:\r\n\r\n@router.post(\"/login\")\r\nasync def internalLogin(request: Request,\r\n email: str = Form(...),\r\n password: str = Form(...)):\r\n try:\r\n res, token = await Controller.internalLogin(email=email, password=password)\r\n if res:\r\n return {\"message\": \"Success\"}\r\n else:\r\n return {\"message\": \"Failure\"}\r\n except DocumentNotFound as documentNotFoundException:\r\n return {\"message\": \"Error\"}\r\n```\r\n\r\nController:\r\n```\r\n@staticmethod\r\n async def internalLogin(email: str, password: str) -> List[bool | str]:\r\n logger.info(message=\"Inside OpenApi Controller\", fileName=__name__, functionName=\"OpenApiController\")\r\n try:\r\n user = await internalUserDb(email=email)\r\n if user is not None and user.verifyPassword(password):\r\n print(\"Logged In\")\r\n return [True, \"\"]\r\n else:\r\n print(\"Failed\")\r\n return [False, \"\"]\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n\r\n```\r\n\r\nDB:\r\n\r\n```\r\nasync def internalUserDb(email: str) -> InternalUserModel:\r\n try:\r\n user: InternalUserModel = await InternalUserModel.find_one(InternalUserModel.email ==
email)\r\n return user\r\n except DocumentNotFound as documentNotFound:\r\n raise documentNotFound\r\n```\r\n\r\nMy TestCode:\r\n\r\n```\r\n@pytest.mark.anyio\r\nasync def testLogin():\r\n response = await asyncClient.post(\"/internalLogin\",\r\n data={\"email\": \"sample@mail.com\", \"password\": \"samplePass\"})\r\n assert response.status_code == 303\r\n```\r\n\r\nMy error while testing is: \r\n\r\n```\r\nFAILED Tests/TestLogin.py::testLogin[asyncio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\nFAILED Tests/TestLogin.py::testLogin[trio] - AttributeError: type object 'InternalUserModel' has no attribute 'email'\r\n```\r\n\r\n\r\n### Description\r\n\r\nHello, I am new to FastAPI. I am trying to test the fast api with PyTest. Normal tests are working perfectly fine but I am using MongoDB as backend to store my data. While I try to test the route that does some data fetching from database it shows error like `attribute not inside the model`. I am using Beanie ODM for MongoDB.\r\n\r\n### Operating System\r\n\r\nmacOS\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.73\r\n\r\n### Python Version\r\n\r\n3.10\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b93f8a709ab3923d1268dbc845f41985c0302b33', 'files': [{'path': 'docs/en/docs/advanced/testing-events.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["docs/en/docs/advanced/testing-events.md"], "test": [], "config": [], "asset": []}} +{"organization": "fastapi", "repo_name": "fastapi", "base_commit": "78b07cb809e97f400e196ff3d89862b9d5bd5dc2", "iss_html_url": "https://github.com/fastapi/fastapi/issues/4587", "iss_label": "question\nquestion-migrate", "title": "Use the raw response in Reponse classes", "body": "### First Check\r\n\r\n- [X] I added a very descriptive title to this issue.\r\n- [X] I used the GitHub search to find a similar issue and didn't find it.\r\n- [X] I searched the FastAPI documentation, with the integrated search.\r\n- [X] I already searched in Google \"How to X in FastAPI\" and didn't find any information.\r\n- [X] I already read and followed all the tutorial in the docs and didn't find an answer.\r\n- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).\r\n- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).\r\n- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).\r\n\r\n### Commit to Help\r\n\r\n- [X] I commit to help with one of those options \ud83d\udc46\r\n\r\n### Example Code\r\n\r\n```python\r\nclass CustomEncoder():\r\n def encode(self, dict_data)\r\n return dict_data\r\n\r\nclass PhotonJSONResponse(JSONResponse):\r\n def __init__(self, content: typing.Any = None, status_code: int = 200, headers: dict = None, media_type: str = None,\r\n background: BackgroundTask = None) -> None:\r\n # Fetch the untouched response in the upper stacks\r\n current_frame = inspect.currentframe()\r\n self.raw_response = None\r\n while current_frame.f_back:\r\n if 'raw_response' in current_frame.f_locals:\r\n self.raw_response = current_frame.f_locals['raw_response']\r\n break\r\n current_frame = 
current_frame.f_back\r\n \r\n self._encoder = CustomEncoder()\r\n super().__init__(content, status_code, headers, media_type, background)\r\n\r\n def render(self, content: Any) -> bytes:\r\n dict_data = self._encoder.encode(self.raw_response)\r\n return super().render(dict_data)\r\n```\r\n\r\n\r\n### Description\r\n\r\nI want to access the raw response that hasn't been through the json_encoder inside my response class. This is because I have custom types that are handled in a custom encoder. I have looked through the relevant fastapi code and I can't find a way to override the encoder for all requests either. As you can see in the example code I currently use reflection to fetch the raw_response in the upper stack frame, however this is not very reliable. I also can't seem to do this using an APIRoute implementation because it would require re-implementing the route handler which is messy, maybe it would be more relevant in there though.\r\n\r\n### Operating System\r\n\r\nWindows\r\n\r\n### Operating System Details\r\n\r\n_No response_\r\n\r\n### FastAPI Version\r\n\r\n0.63.0\r\n\r\n### Python Version\r\n\r\n3.8.12\r\n\r\n### Additional Context\r\n\r\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '78b07cb809e97f400e196ff3d89862b9d5bd5dc2', 'files': [{'path': 'fastapi/routing.py', 'Loc': {\"('APIRoute', None, 300)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["fastapi/routing.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3341", "iss_label": "bug", "title": "state isn't clearly understood how to incorporate for script.py", "body": "### Describe the bug\n\nI see that output_modifier and a few other functions require state object, which is not defined in script.py nor are any of the existing plugins (that I looked at) use a state object.\r\n\r\nAs a result, I am unable to use the functions. 
I get a message about needing to pass state\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\ntry to use this snippet\r\n\r\nhttps://github.com/ChobPT/oobaboogas-webui-langchain_agent/blob/main/script.py#L185-L190\r\n\r\n```\r\ndef input_modifier(string):\r\n if string[:3] == \"/do Story\":\r\n print('hi')\r\n string += ' Tell me a story.'\r\n else:\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\n return string.replace('/do ', '')\r\n\r\n```\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nFile \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state_dict)\r\nNameError: name 'state_dict' is not defined\r\n\r\n```\r\n```\r\n File \"/home/user/oobabooga_linux/text-generation-webui/extensions/helloworld/script.py\", line 144, in input_modifier\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0],state)\r\nNameError: name 'state' is not defined\r\n\r\n```\r\n\r\n```\r\n output_modifier(string.split(\"###\")[0].split(\"Human:\")[0])\r\nTypeError: output_modifier() missing 1 required positional argument: 'state'\r\n\r\n```\r\n\r\nand if I removed state from output_modifier (as you see in my snippet above w print) I get no modified output nor print statement at console\r\nOutput generated in 1.99 seconds (9.06 tokens/s, 18 tokens, context 66, seed 123523724)\r\nTraceback (most recent call last):\r\n File \"/home/user/oobabooga_linux/text-generation-webui/server.py\", line 1181, in <module>\r\n time.sleep(0.5)\n```\n\n\n### System Info\n\n```shell\npython 3.9 oracle linux 8.5\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ecd92d6a4e9a7c74d2bf436571f2c7e96beb9f92', 'files': []}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "ChobPT", "pro": "oobaboogas-webui-langchain_agen", "path": ["script.py"]}], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["script.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "8962bb173e9bdc36eb9cf28fe9e1952b2976e781", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5337", "iss_label": "bug", "title": "Generation slows at max context, even when truncated", "body": "### Describe the bug\r\n\r\n### Issue Summary\r\nWhen generating, if the context is near the maximum set via n_ctx (and the truncate value in Parameters is set to match it), generation will be quite slow. This does not occur if the context is more than approximately 300-500 below the set value. It still occurs even if the n_ctx and truncation numbers are reduced (though the slowdown becomes less severe.\r\n\r\n### Observations\r\n\r\n- Since speed is perfectly fine up until we near the context limit, then immediately drops, I suspect this has something to do with how the context is truncated; the actual act of truncating the input seems to cause the slowdown, despite the fact that this should be a simple operation.\r\n- Increasing the limit back up after lowering also does not help;; makes sense, since it just pulls in as much of the conversation as will fit and hits the context limit again, requiring truncation. 
\r\n\r\n### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Reproduction\r\n\r\n- Set your n_ctx to a given value. (In my case, 8192).\r\n- Chat with the model, noting the speed. At this point, it should be fairly rapid. (In my case, 4.72 tokens/s up to context 7792).\r\n- As soon as the context reaches approximately 7800, generation slows. (In my case, 0.87 tokens/s on the message immediately after the above, at context 7798).\r\n- At this point, reducing n_ctx and reloading the model only partially helps. (In my case; reducing to 4092 produced 2.51 tokens/s at context 3641.\r\n\r\n### Screenshot\r\n\r\n_No response_\r\n\r\n### Logs\r\n\r\n```shell\r\nN/A\r\n```\r\n\r\n\r\n### System Info\r\n\r\n```shell\r\n- Model: TheBloke/Silicon-Maid-7B-GGUF, using the 5_K_M quant.\r\n- Branch: dev\r\n- Commit: 8962bb173e9bdc36eb9cf28fe9e1952b2976e781\r\n- OS: Windows 11\r\n```\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8962bb173e9bdc36eb9cf28fe9e1952b2976e781', 'files': [{'path': 'modules/ui_model_menu.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui_model_menu.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "564a8c507fffc8b25a056d8930035c63da71fc7b", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/3042", "iss_label": "bug", "title": "ERROR:Task exception was never retrieved", "body": "### Describe the bug\n\nRight after installation i open the webui in the browser and i receive an error.\n\n### Is there an existing issue for this?\n\n- [x] I have searched the existing issues\n\n### Reproduction\n\nRight after installation i open the webui in the browser and i receive this error.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\n2023-07-07 21:25:11 ERROR:Task exception was never retrieved\r\nfuture: <Task finished name='3s4vbrhqz8a_103' coro=<Queue.process_events() done, defined at D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py:343> exception=1 validation error for PredictBody\r\nevent_id\r\n Field required [type=missing, input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.1.2/v/missing>\r\nTraceback (most recent call last):\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 347, in process_events\r\n client_awake = await self.gather_event_data(event)\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 220, in gather_event_data\r\n data, client_awake = await self.get_message(event, timeout=receive_timeout)\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\gradio\\queueing.py\", line 456, in get_message\r\n return PredictBody(**data), True\r\n File \"D:\\oobabooga\\oobabooga_windows\\installer_files\\env\\lib\\site-packages\\pydantic\\main.py\", line 150, in __init__\r\n __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)\r\npydantic_core._pydantic_core.ValidationError: 1 validation error for PredictBody\r\nevent_id\r\n Field required [type=missing, 
input_value={'data': [], 'event_data'...on_hash': '3s4vbrhqz8a'}, input_type=dict]\r\n For further information visit https://errors.pydantic.dev/2.1.2/v/missing\n```\n\n\n### System Info\n\n```shell\nWindows 11\r\nEVGA RTX3080\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '564a8c507fffc8b25a056d8930035c63da71fc7b', 'files': [{'path': 'requirements.txt', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "07510a24149cbd6fd33df0c4a440d60b9783a18e", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/2171", "iss_label": "enhancement\nstale", "title": "support for fastest-inference-4bit branch of GPTQ-for-LLaMa", "body": "**Description**\r\n\r\nThere is new branch of GPTQ-for-LLaMa - fastest-inference-4bit that combines triton and cuda and people say it's much faster. It would be nice if it was supported here. I tried to compile it myself but it doesnt work with this webui because there is no llama_inference_offload.py in the new branch. \r\n\r\n**Additional Context**\r\nhttps://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-inference-4bit\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '07510a24149cbd6fd33df0c4a440d60b9783a18e', 'files': [{'path': 'modules/GPTQ_loader.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/GPTQ_loader.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "7ddf6147accfb5b95e7dbbd7f1822cf976054a2a", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/446", "iss_label": "bug", "title": "Factual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047", "body": "### Describe the bug\n\nI get factual answers in ?? 
like this Factual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nCommon sense questions and answers\r\n\r\nQuestion: Hi\r\nFactual answer: \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047 \u2047\n\n### Screenshot\n\n<img width=\"1535\" alt=\"Screenshot 2023-03-20 at 12 43 35 AM\" src=\"https://user-images.githubusercontent.com/25454015/226214371-e9424c75-6b81-4189-9865-70446b62235d.png\">\r\n\n\n### Logs\n\n```shell\nLoading LLaMA-7b...\r\nLoading checkpoint shards: 100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 33/33 [00:06<00:00, 5.47it/s]\r\nLoaded the model in 147.25 seconds.\r\nOutput generated in 12.96 seconds (4.71 tokens/s, 61 tokens)\r\nOutput generated in 13.20 seconds (0.61 tokens/s, 8 tokens)\n```\n\n\n### System Info\n\n```shell\nMacOS Ventura 13.2.1, Apple M1 Max\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7ddf6147accfb5b95e7dbbd7f1822cf976054a2a', 'files': [{'path': 'download-model.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2\n\u7ed3\u679c\u5947\u602a", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["download-model.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "3609ea69e4c4461a4f998bd12cc559d5a016f328", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5761", "iss_label": "bug", "title": "api broke: AttributeError: 'NoneType' object has no attribute 'replace'", "body": "### Describe the bug\n\napi calls result in\r\nAttributeError: 'NoneType' object has no attribute 'replace'\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\ninstall no requirements and llama-cpp-python by source then try to run curl\r\n\r\ncurl http://192.168.3.17:5000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'\r\n\r\nException in ASGI application\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 411, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 69, in __call__\r\n return await self.app(scope, 
receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py\", line 1054, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py\", line 123, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 62, in __call__\r\n await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 758, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 778, in app\r\n await route.handle(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 299, in handle\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 79, in app\r\n await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 74, in app\r\n response = await func(request)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 278, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/data/text-generation-webui/extensions/openai/script.py\", line 137, in openai_chat_completions\r\n response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 536, in chat_completions\r\n return deque(generator, maxlen=1).pop()\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 315, in chat_completions_common\r\n prompt = generate_chat_prompt(user_input, generate_params)\r\n File \"/data/text-generation-webui/modules/chat.py\", line 97, in generate_chat_prompt\r\n user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),\r\n File \"/data/text-generation-webui/modules/chat.py\", 
line 636, in replace_character_names\r\n text = text.replace('{{user}}', name1).replace('{{char}}', name2)\r\nAttributeError: 'NoneType' object has no attribute 'replace'\r\n\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\ninstall no avx2 requirements and llama-cpp-python by source then try to run curl\r\n\r\ncurl http://192.168.3.17:5000/v1/chat/completions \\\r\n -H \"Content-Type: application/json\" \\\r\n -d '{'messages': [{'role': 'system', 'content': 'You are a helpful assistant.'}, {'role': 'user', 'content': 'tell me a story.'}], 'max_new_tokens': 1024, 'preset': 'None', 'do_sample': False, 'temperature': 1.0, 'top_p': 0, 'typical_p': 1, 'epsilon_cutoff': 0, 'eta_cutoff': 0, 'tfs': 1, 'top_a': 0, 'repetition_penalty': 1.18, 'repetition_penalty_range': 0, 'top_k': 50, 'min_length': 0, 'no_repeat_ngram_size': 2, 'num_beams': 1, 'penalty_alpha': 0, 'length_penalty': 1, 'early_stopping': True, 'mirostat_mode': 0, 'mirostat_tau': 5, 'mirostat_eta': 0.1, 'seed': -1, 'add_bos_token': True, 'truncation_length': 1068, 'ban_eos_token': False, 'skip_special_tokens': True, 'stopping_strings': [], 'mode': 'instruct', 'instruction_template': 'Alpaca'}'\r\n\r\nException in ASGI application\r\nTraceback (most recent call last):\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/protocols/http/httptools_impl.py\", line 411, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/uvicorn/middleware/proxy_headers.py\", line 69, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/applications.py\", line 1054, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/applications.py\", line 123, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 186, in __call__\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/errors.py\", line 164, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/cors.py\", line 83, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/middleware/exceptions.py\", line 62, in __call__\r\n await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 758, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 778, in app\r\n await route.handle(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 299, in handle\r\n await self.app(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 79, in app\r\n 
await wrap_app_handling_exceptions(app, request)(scope, receive, send)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 64, in wrapped_app\r\n raise exc\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/_exception_handler.py\", line 53, in wrapped_app\r\n await app(scope, receive, sender)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/starlette/routing.py\", line 74, in app\r\n response = await func(request)\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 278, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/root/miniconda3/envs/textgen/lib/python3.10/site-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/data/text-generation-webui/extensions/openai/script.py\", line 137, in openai_chat_completions\r\n response = OAIcompletions.chat_completions(to_dict(request_data), is_legacy=is_legacy)\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 536, in chat_completions\r\n return deque(generator, maxlen=1).pop()\r\n File \"/data/text-generation-webui/extensions/openai/completions.py\", line 315, in chat_completions_common\r\n prompt = generate_chat_prompt(user_input, generate_params)\r\n File \"/data/text-generation-webui/modules/chat.py\", line 97, in generate_chat_prompt\r\n user_bio=replace_character_names(state['user_bio'], state['name1'], state['name2']),\r\n File \"/data/text-generation-webui/modules/chat.py\", line 636, in replace_character_names\r\n text = text.replace('{{user}}', name1).replace('{{char}}', name2)\r\nAttributeError: 'NoneType' object has no attribute 'replace'\n```\n\n\n### System Info\n\n```shell\noracle linux 8, rocky linux 9\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3609ea69e4c4461a4f998bd12cc559d5a016f328', 'files': [{'path': 'modules/chat.py', 'Loc': {\"(None, 'replace_character_names', 637)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/chat.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/5774", "iss_label": "bug", "title": "The checksum verification for miniconda_installer.exe has failed.", "body": "### Describe the bug\n\nThe checksum verification for miniconda_installer.exe has failed.\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\nAfter I extracted the files, I clicked start_windows.bat.\n\n### Screenshot\n\n_No response_\n\n### Logs\n\n```shell\nDownloading Miniconda from https://repo.anaconda.com/miniconda/Miniconda3-py310_23.3.1-0-Windows-x86_64.exe to D:\\text-generation-webui\\installer_files\\miniconda_installer.exe\r\n % Total % Received % Xferd Average Speed Time Time Time Current\r\n Dload Upload Total Spent Left Speed\r\n100 53.8M 100 53.8M 0 0 23.2M 0 0:00:02 0:00:02 --:--:-- 23.3M\r\nfind: '/i': No such file or directory\r\nfind: '/v': No such file or directory\r\nfind: ' ': No such file or directory\r\nfind: '/i': No such file or directory\r\nfind: 
'307194e1f12bbeb52b083634e89cc67db4f7980bd542254b43d3309eaf7cb358': No such file or directory\r\nThe checksum verification for miniconda_installer.exe has failed.\n```\n\n\n### System Info\n\n```shell\nwindows11,CPU:i711800H,GPU:NVDIA RTXA2000Laptop\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1a7c027386f43b84f3ca3b0ff04ca48d861c2d7a', 'files': [{'path': 'start_windows.bat', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["start_windows.bat"]}} +{"organization": "oobabooga", "repo_name": "text-generation-webui", "base_commit": "c17624432726ab5743dfa21af807d559e4f4ff8c", "iss_html_url": "https://github.com/oobabooga/text-generation-webui/issues/6209", "iss_label": "bug\nstale", "title": "Oobabooga login not working through reverse proxy", "body": "### Describe the bug\n\nI have the latest text-generation-webui (just ran the update script) running on my home computer running Windows 11. I am running it on a LAN IP (192.168.1.102) and reverse-proxying it with Nginx so I can access it remotely over the Internet.\r\n\r\nSome recent update to text-generation-webui appears to have broken the login code. When I'm logging in from the LAN, I see the normal login screen, and authentication works. When I'm logging in from the WAN, I get a bare-bones UI which refuses to accept my login creds. \r\n\r\nI have been running this setup for months without change, so my assumption is that it's a recent change in the text-generation-webui codebase that's behind it.\r\n\r\nMy CMD_FLAGS.txt is:\r\n\r\n--gradio-auth myusername:mypassword\r\n--auto-devices\r\n--listen\r\n--listen-host 192.168.1.102\r\n--listen-port 7860\r\n\r\n\r\n\n\n### Is there an existing issue for this?\n\n- [X] I have searched the existing issues\n\n### Reproduction\n\n1. Start the webui on a WAN port.\r\n2. Reverse-proxy to a publically-accessible IP.\r\n3. Try to login.\n\n### Screenshot\n\n![Oobaboga_Login](https://github.com/oobabooga/text-generation-webui/assets/13558208/823b2df8-d4e8-43c1-ab93-beb72cf6cae7)\r\n\n\n### Logs\n\n```shell\nI see repeated errors in the console: \"WARNING: invalid HTTP request received\", but no Python trace info.\n```\n\n\n### System Info\n\n```shell\nWindows 11, NVidia Founder RTX 2060 Super.\r\n\r\nReverse proxy is NGinx running on Debian. It uses Let's Encrypt so I can encrypt my remote connection.\n```\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c17624432726ab5743dfa21af807d559e4f4ff8c', 'files': [{'path': 'requirements/full/requirements.txt', 'Loc': {'(None, None, 7)': {'mod': [7]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements/full/requirements.txt"], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "69d863b44ab5c7dad6eea04b7e3563f491c714a4", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/376", "iss_label": "", "title": "Unable to select camera device through UI", "body": "It would be nice to have a way to select which camera to use. I am on Ubuntu 22.04 with a Linux laptop. 
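For finding which index the external device is on, a quick probing sketch with OpenCV (illustrative only, not the project's own code; index values are machine-specific):\r\n\r\n```python\r\nimport cv2\r\n\r\n# Try the first few indices; a device that opens and yields a frame is usable.\r\nfound = []\r\nfor index in range(10):\r\n    cap = cv2.VideoCapture(index)\r\n    if cap.isOpened():\r\n        ok, _ = cap.read()\r\n        if ok:\r\n            found.append(index)\r\n    cap.release()\r\n\r\nprint('usable camera indices:', found)\r\n```\r\n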
Since I use an external camera and keep my laptop closed, the program is defaulting to the on-board camera.\r\n\r\nI was unable to find a quick/easy way to change the default camera in Ubuntu, so it would be nice if the program was able to allow a selection in the UI.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '69d863b44ab5c7dad6eea04b7e3563f491c714a4', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'webcam_preview', 252)\": {'mod': [259]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "080d6f5110d2e185e8ce4e10451ac96313079be2", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/315", "iss_label": "", "title": "How to select the correct camera?", "body": "How to select the correct camera ? \r\nIs there any method to improve the output resolution of the camera?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '080d6f5110d2e185e8ce4e10451ac96313079be2', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'webcam_preview', 252)\": {'mod': [259]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "5bc3ada6324a28a8d8556da1176b546f2d2140f8", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/922", "iss_label": "", "title": "ERROR: Cannot install -r requirements.txt (line 13), tensorflow and typing-extensions>=4.8.0 because these package versions have conflicting dependencies.", "body": "The conflict is caused by:\n The user requested typing-extensions>=4.8.0\n torch 2.5.1+cu121 depends on typing-extensions>=4.8.0\n tensorflow-intel 2.12.1 depends on typing-extensions<4.6.0 and >=3.6.6", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5bc3ada6324a28a8d8556da1176b546f2d2140f8', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 19)': {'mod': [19]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n\u4f9d\u8d56\u58f0\u660e"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "6b0cc749574d7307b2f7deedfa2a0dbb363329da", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/243", "iss_label": "", "title": "[experimental] doesn't show the camera I want..", "body": "I'm using the `experimental` branch so I could choose the camera I wanted (OBS Virtual Camera) which is (2) but it only shows \"Camera 0\", so I made a test script and I was able to pull my OBS Virtual Camera using 'matplotlib',\r\n\r\n```\r\n(venv) (base) PS E:\\deep-live-cam> python list.py\r\n[ WARN:0@10.769] global cap_msmf.cpp:1769 CvCapture_MSMF::grabFrame videoio(MSMF): can't grab frame. 
Error: -2147483638\r\n[ WARN:0@10.839] global cap.cpp:304 cv::VideoCapture::open VIDEOIO(DSHOW): raised OpenCV exception:\r\n\r\nOpenCV(4.10.0) D:\\a\\opencv-python\\opencv-python\\opencv\\modules\\videoio\\src\\cap_dshow.cpp:2763: error: (-215:Assertion failed) pVih in function 'videoInput::start'\r\n\r\n\r\n[ERROR:0@10.846] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.478] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.563] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.635] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.711] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.787] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\n[ERROR:0@16.862] global obsensor_uvc_stream_channel.cpp:158 cv::obsensor::getStreamChannelGroup Camera index out of range\r\nAvailable camera indices: [2]\r\nEnter the camera index you want to use: 2\r\nCamera 2 opened successfully. Press 'q' to quit.\r\nPress 'q' and Enter to quit, or just Enter to continue: q\r\n(venv) (base) PS E:\\deep-live-cam>\r\n```\r\n\r\nIt shows up like this:\r\n\r\n<img width=\"419\" alt=\"Screen Shot 2024-08-12 at 8 31 51 PM\" src=\"https://github.com/user-attachments/assets/3f16b4f6-6ac7-492f-88a5-6abdc58e29b0\">\r\n\r\nSo I know it's possible, is there a way to force 'deep-live-cam' to use \"Camera (2)\" ?\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '6b0cc749574d7307b2f7deedfa2a0dbb363329da', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'webcam_preview', 307)\": {'mod': [322]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "513e41395687921d589fc10bbaf2f72ed579c84a", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/915", "iss_label": "", "title": "Subject: Missing ui.py file in modules directory - preventing project execution", "body": "Hi,\n\nI'm trying to run the Deep-Live-Cam project, but I'm encountering a problem. The ui.py file is missing from the modules directory. I've tried the following:\n\n* Cloning the repository using git clone: `git clone https://github.com/hacksider/Deep-Live-Cam.git`\n* Cloning the repository using GitHub Desktop.\n* Downloading the repository as a ZIP file.\n\nIn all cases, the ui.py file is not present. I've also checked the repository on GitHub.com directly in my browser, and the file is missing there as well.\n\nThe modules directory contains the following files: [List the files you see].\n\nCould you please let me know how to obtain the ui.py file? 
Is it intentionally missing, or is there a separate download/generation step required?\n\nThanks for your help!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '513e41395687921d589fc10bbaf2f72ed579c84a', 'files': [{'path': 'modules/ui.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "4", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "a49d3fc6e5a228a6ac92e25831c507996fdc0042", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/697", "iss_label": "", "title": "[Solved] inswapper_128_fp16.onnx failed:Protobuf parsing failed", "body": "I have this error on macOS Apple Silicon.\r\n`Exception in Tkinter callback\r\nTraceback (most recent call last):\r\n File \"/opt/homebrew/Cellar/python@3.10/3.10.15/Frameworks/Python.framework/Versions/3.10/lib/python3.10/tkinter/__init__.py\", line 1921, in __call__\r\n return self.func(*args)\r\n File \"/Users//PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/customtkinter/windows/widgets/ctk_button.py\", line 554, in _clicked\r\n self._command()\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 242, in <lambda>\r\n command=lambda: webcam_preview(\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 649, in webcam_preview\r\n create_webcam_preview(\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/ui.py\", line 707, in create_webcam_preview\r\n temp_frame = frame_processor.process_frame(source_image, temp_frame)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 65, in process_frame\r\n temp_frame = swap_face(source_face, target_face, temp_frame)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 49, in swap_face\r\n return get_face_swapper().get(temp_frame, target_face, source_face, paste_back=True)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/modules/processors/frame/face_swapper.py\", line 44, in get_face_swapper\r\n FACE_SWAPPER = insightface.model_zoo.get_model(model_path, providers=modules.globals.execution_providers)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 96, in get_model\r\n model = router.get_model(providers=providers, provider_options=provider_options)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 40, in get_model\r\n session = PickableInferenceSession(self.onnx_file, **kwargs)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/insightface/model_zoo/model_zoo.py\", line 25, in __init__\r\n super().__init__(model_path, **kwargs)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 347, in __init__\r\n self._create_inference_session(providers, provider_options, disabled_optimizers)\r\n File \"/Users/PycharmProjects/Deep-Live-Cam/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py\", line 384, in _create_inference_session\r\n sess = C.InferenceSession(session_options, self._model_path, True, 
self._read_config_from_model)\r\nonnxruntime.capi.onnxruntime_pybind11_state.InvalidProtobuf: [ONNXRuntimeError] : 7 : INVALID_PROTOBUF : Load model from /Users/PycharmProjects/Deep-Live-Cam/models/inswapper_128_fp16.onnx failed:Protobuf parsing failed.`\r\n\r\n\r\nThis https://github.com/hacksider/Deep-Live-Cam/issues/613 didn't help. \r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "hacksider", "pro": "deep-live-cam", "path": ["inswapper_128_fp16.onnx"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2\n+\n0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["inswapper_128_fp16.onnx"]}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "d4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/94", "iss_label": "", "title": "Can't find onnxruntime-silicon==1.13.1", "body": "Hi,\r\n\r\nCurrently on MacOS (Silicon, M2 Max), it seems not possible to download (with pip at least) the 1.13.1 version of onnxruntime.\r\n\r\n`ERROR: Could not find a version that satisfies the requirement onnxruntime-silicon==1.13.1 (from versions: 1.14.1, 1.15.0, 1.16.0, 1.16.3)\r\nERROR: No matching distribution found for onnxruntime-silicon==1.13.1`\r\n\r\nAnd, if I'm right, Deep-Live-Cam doesn't support more recent versions of onnxruntime, right ? So if that's the case, what could be a workaround ?\r\n\r\nThanks !", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd4c8adc5d3b0ef5cb13492d3fac83bb4c6835d33', 'files': [{'path': 'requirements.txt', 'Loc': {'(None, None, 16)': {'mod': [16]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "install require"}, "loctype": {"code": [], "doc": [], "test": [], "config": ["requirements.txt"], "asset": []}} +{"organization": "hacksider", "repo_name": "Deep-Live-Cam", "base_commit": "eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa", "iss_html_url": "https://github.com/hacksider/Deep-Live-Cam/issues/345", "iss_label": "", "title": "Program crashes when processing with DirectML", "body": "I am using an AMD RX 6600 XT GPU with the latest drivers and attempting to run the program with DirectML. The program's UI turns white and then crashes. It works fine with CPU execution but fails with DirectML.\r\nI already tried to reinstall onnxruntime-directml with no effect. 
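For the `inswapper_128_fp16.onnx` Protobuf failure above, a common cause is a truncated download or a git-lfs pointer file sitting where the real model should be. A hedged check, assuming the `onnx` package is installed and the model path from the traceback:

```python
import os
import onnx  # assumed installed: pip install onnx

MODEL_PATH = "models/inswapper_128_fp16.onnx"  # path from the traceback above

# A git-lfs pointer file is only a few hundred bytes, while the real model is
# hundreds of megabytes, so a size check catches the most common cause.
print("size on disk: %.1f MB" % (os.path.getsize(MODEL_PATH) / 1e6))

try:
    onnx.checker.check_model(onnx.load(MODEL_PATH))
    print("model parses cleanly; look elsewhere for the problem")
except Exception as exc:
    print("model file is corrupt or incomplete; re-download it:", exc)
```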
Terminal:\r\n\r\n (myenv) E:\\Edesktop\\deep-live\\Deep-Live-Cam>python run.py --execution-provider dml\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\det_10g.onnx detection [1, 3, '?', '?'] 127.5 128.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0\r\nApplied providers: ['DmlExecutionProvider', 'CPUExecutionProvider'], with options: {'DmlExecutionProvider': {}, 'CPUExecutionProvider': {}}\r\nfind model: C:\\Users\\USER/.insightface\\models\\buffalo_l\\w600k_r50.onnx recognition ['None', 3, 112, 112] 127.5 127.5\r\nset det-size: (640, 640)\r\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 100/100 [00:01<00:00, 50.67it/s]\r\n[DLC.CORE] Creating temp resources...\r\n[DLC.CORE] Extracting frames...\r\n[DLC.FACE-SWAPPER] Progressing...\r\nProcessing: 0%| | 0/125 [00:00<?, ?frame/s, execution_providers=['DmlExecutionProvider'], execution_threads=8, max_memory=16\r\n(myenv) E:\\Edesktop\\deep-live\\Deep-Live-Cam>\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'eab5ba7027db1a4d0ec97883aa7a61b55fb81dfa', 'files': [{'path': 'modules/ui.py', 'Loc': {\"(None, 'create_root', 93)\": {'mod': [139, 140, 141]}}, 'status': 'modified'}, {'path': 'modules/core.py', 'Loc': {\"(None, 'parse_args', 47)\": {'mod': [67, 71]}, '(None, None, None)': {'mod': [11]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["modules/ui.py", "modules/core.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Textualize", "repo_name": "rich", "base_commit": "7e1928efee53da1ac7d156912df04aef83eefea5", "iss_html_url": "https://github.com/Textualize/rich/issues/1247", "iss_label": "Needs triage", "title": "[REQUEST] Extra caching for `get_character_cell_size`", "body": "**How would you improve Rich?**\r\n\r\nAdd a small `lru_cache` to https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L28 , similar to cache one layer down for https://github.com/willmcgugan/rich/blob/master/rich/cells.py#L46\r\n\r\nSize `4096` was plenty for 
what I describe below.\r\n\r\n**What problem does it solve for you?**\r\n\r\nI'm working on some optimizations for a TUI application here https://github.com/JoshKarpel/spiel/pull/37\r\n\r\nThis was my first idea on how to improve rendering time, based on https://github.com/benfred/py-spy telling me that a lot of time was being spent in `get_character_cell_size`, and this was my first thought for a solution.\r\n\r\nAdding the cache described above gives a ~30% speedup on the benchmarks I was using to work on that PR. In that application I'm repeatedly re-rendering the same content (in a `Live`), so adding a small cache to `get_character_cell_size` represents a significant speedup since the set of characters is usually the same from frame to frame. The benchmark is mostly printing colorized ASCII, with some unicode also drawn from a small set (box-drawing characters, block shapes, etc.). \r\n\r\nI guess that since there's lots of `Layout` and `Padding` going on, the most common character is probably space... perhaps the ASCII set that there's already a shortcut for could just be pre-computed and stored in a set? There's probably a lot of good ways to approach this :) ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7e1928efee53da1ac7d156912df04aef83eefea5', 'files': [{'path': 'rich/cells.py', 'Loc': {\"(None, 'get_character_cell_size', 28)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/cells.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Textualize", "repo_name": "rich", "base_commit": "5c9161d0c48254fb579827249a9ee7d88f4589b7", "iss_html_url": "https://github.com/Textualize/rich/issues/1489", "iss_label": "Needs triage", "title": "[REQUEST] current item of a progress", "body": "When creating progress bars for logical items (that are then supported with additional progress bars),\r\nI would consider it helpful if it were possible to add a name/renderable for the current item, and to push those in updates.\r\n\r\nI'm not yet sure how this is best expressed/implemented.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5c9161d0c48254fb579827249a9ee7d88f4589b7', 'files': [{'path': 'rich/progress.py', 'Loc': {\"('Progress', 'update', 739)\": {'mod': []}}, 'status': 'modified'}, {'path': 'rich/progress.py', 'Loc': {\"('Task', None, 437)\": {'mod': [466]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/progress.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Textualize", "repo_name": "rich", "base_commit": "0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80", "iss_html_url": "https://github.com/Textualize/rich/issues/2457", "iss_label": "bug", "title": "[BUG] Console(no_color=True) does not work on Windows 10", "body": "You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).\r\n\r\n**Describe the bug**\r\n\r\nThe \"no_color=True\" Console parameter does not seem to do anything on Windows 10. I tested on both Cmder and native cmd.exe terminals and got the same results. 
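Returning to the `get_character_cell_size` caching request above: a minimal sketch of the proposed `lru_cache` wrapper. The function body here is a stand-in using `unicodedata` (the real `rich/cells.py` consults a table of codepoint ranges), so only the decorator reflects the actual proposal:

```python
from functools import lru_cache
import unicodedata

@lru_cache(maxsize=4096)  # 4096 is the size the reporter found to be plenty
def get_character_cell_size(character):
    """Stand-in body: rich/cells.py really looks up a table of unicode ranges."""
    return 2 if unicodedata.east_asian_width(character) in ("F", "W") else 1

# Re-rendering the same frame hits the cache for every repeated glyph.
print(get_character_cell_size("a"), get_character_cell_size("あ"))
print(get_character_cell_size.cache_info())
```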
See screenshots below.\r\n\r\nCmder:\r\n![richbug01](https://user-images.githubusercontent.com/7690118/183566141-724f7390-f9f9-4063-bf31-b0144e391975.PNG)\r\n\r\ncmd.exe\r\n![richbug02](https://user-images.githubusercontent.com/7690118/183566181-5ef45bf6-366c-4c69-b6f8-6ad25d5aff41.PNG)\r\n\r\nfor reference, this is what it looks like from my Ubuntu laptop:\r\n\r\n![richbug-linux-ok](https://user-images.githubusercontent.com/7690118/183566308-62bbd545-1c90-4345-bd3c-a228ea0f5f35.png)\r\n\r\nAlso happy to help fix this if you can point me in the right direction. Thank you!\r\n\r\n**Platform**\r\n<details>\r\n<summary>Click to expand</summary>\r\n\r\nOS: Windows 10\r\n\r\n**Cmder:**\r\n\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 A high level console interface. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 <console width=155 ColorSystem.WINDOWS> \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 color_system = 'windows' \u2502\r\n\u2502 encoding = 'utf-8' \u2502\r\n\u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502\r\n\u2502 height = 83 \u2502\r\n\u2502 is_alt_screen = False \u2502\r\n\u2502 is_dumb_terminal = False \u2502\r\n\u2502 is_interactive = True \u2502\r\n\u2502 is_jupyter = False \u2502\r\n\u2502 is_terminal = True \u2502\r\n\u2502 legacy_windows = True \u2502\r\n\u2502 no_color = False \u2502\r\n\u2502 options = ConsoleOptions( \u2502\r\n\u2502 size=ConsoleDimensions(width=155, height=83), \u2502\r\n\u2502 legacy_windows=True, \u2502\r\n\u2502 min_width=1, \u2502\r\n\u2502 max_width=155, \u2502\r\n\u2502 is_terminal=True, \u2502\r\n\u2502 encoding='utf-8', \u2502\r\n\u2502 max_height=83, \u2502\r\n\u2502 justify=None, \u2502\r\n\u2502 overflow=None, \u2502\r\n\u2502 no_wrap=False, \u2502\r\n\u2502 highlight=None, \u2502\r\n\u2502 markup=None, \u2502\r\n\u2502 height=None \u2502\r\n\u2502 ) \u2502\r\n\u2502 quiet = False \u2502\r\n\u2502 record = False \u2502\r\n\u2502 safe_box = True \u2502\r\n\u2502 size = ConsoleDimensions(width=155, height=83) \u2502\r\n\u2502 soft_wrap = False \u2502\r\n\u2502 stderr = False \u2502\r\n\u2502 style = None \u2502\r\n\u2502 tab_size = 8 \u2502\r\n\u2502 width = 155 
\u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510\r\n\u2502 Windows features available. \u2502\r\n\u2502 \u2502\r\n\u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502\r\n\u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502\r\n\u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502\r\n\u2502 \u2502\r\n\u2502 truecolor = False \u2502\r\n\u2502 vt = False \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510\r\n\u2502 { \u2502\r\n\u2502 'TERM': 'cygwin', \u2502\r\n\u2502 'COLORTERM': None, \u2502\r\n\u2502 'CLICOLOR': None, \u2502\r\n\u2502 'NO_COLOR': None, \u2502\r\n\u2502 'TERM_PROGRAM': None, \u2502\r\n\u2502 'COLUMNS': '157', \u2502\r\n\u2502 'LINES': '83', \u2502\r\n\u2502 'JUPYTER_COLUMNS': None, \u2502\r\n\u2502 'JUPYTER_LINES': None, \u2502\r\n\u2502 'JPY_PARENT_PID': None, \u2502\r\n\u2502 'VSCODE_VERBOSE_LOGGING': None \u2502\r\n\u2502 } \u2502\r\n\u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518\r\nplatform=\"Windows\"\r\n\r\n\r\n**cmd.exe**\r\n\r\n\u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500 <class 'rich.console.Console'> \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 A high level console interface. 
\u2502 \u2502 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 <console width=119 ColorSystem.WINDOWS> \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2502 \u2502 color_system = 'windows' \u2502 \u2502 encoding = 'utf-8' \u2502 \u2502 file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> \u2502 \u2502 height = 30 \u2502 \u2502 is_alt_screen = False \u2502 \u2502 is_dumb_terminal = False \u2502 \u2502 is_interactive = True \u2502 \u2502 is_jupyter = False \u2502 \u2502 is_terminal = True \u2502 \u2502 legacy_windows = True \u2502 \u2502 no_color = False \u2502 \u2502 options = ConsoleOptions( \u2502 \u2502 size=ConsoleDimensions(width=119, height=30), \u2502 \u2502 legacy_windows=True, \u2502 \u2502 min_width=1, \u2502 \u2502 max_width=119, \u2502 \u2502 is_terminal=True, \u2502 \u2502 encoding='utf-8', \u2502 \u2502 max_height=30, \u2502 \u2502 justify=None, \u2502 \u2502 overflow=None, \u2502 \u2502 no_wrap=False, \u2502 \u2502 highlight=None, \u2502 \u2502 markup=None, \u2502 \u2502 height=None \u2502 \u2502 ) \u2502 \u2502 quiet = False \u2502 \u2502 record = False \u2502 \u2502 safe_box = True \u2502 \u2502 size = ConsoleDimensions(width=119, height=30) \u2502 \u2502 soft_wrap = False \u2502 \u2502 stderr = False \u2502 \u2502 style = None \u2502 \u2502 tab_size = 8 \u2502 \u2502 width = 119 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u250c\u2500\u2500\u2500 <class 'rich._windows.WindowsConsoleFeatures'> \u2500\u2500\u2500\u2500\u2510 \u2502 Windows features available. 
\u2502 \u2502 \u2502 \u2502 \u250c\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 \u2502 \u2502 WindowsConsoleFeatures(vt=False, truecolor=False) \u2502 \u2502 \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u2502 \u2502 \u2502 \u2502 truecolor = False \u2502 \u2502 vt = False \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 \u250c\u2500\u2500\u2500\u2500\u2500\u2500 Environment Variables \u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2510 \u2502 { \u2502 \u2502 'TERM': None, \u2502 \u2502 'COLORTERM': None, \u2502 \u2502 'CLICOLOR': None, \u2502 \u2502 'NO_COLOR': None, \u2502 \u2502 'TERM_PROGRAM': None, \u2502 \u2502 'COLUMNS': None, \u2502 \u2502 'LINES': None, \u2502 \u2502 'JUPYTER_COLUMNS': None, \u2502 \u2502 'JUPYTER_LINES': None, \u2502 \u2502 'JPY_PARENT_PID': None, \u2502 \u2502 'VSCODE_VERBOSE_LOGGING': None \u2502 \u2502 } \u2502 \u2514\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2500\u2518 platform=\"Windows\" \r\n\r\nrich==12.5.1\r\n\r\n</details>\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0aa85606ad9a7ca6b28a5ae376e433b8e59f6e80', 'files': [{'path': 'rich/console.py', 'Loc': {\"('Console', None, 583)\": {'mod': [612]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["rich/console.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "427cc215310804127b55744fcc3664ede38a4a0d", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/21363", "iss_label": "question", "title": "How does youtube-dl detect advertisements?", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions\r\n- Search the bugtracker for similar questions: http://yt-dl.org/search-issues\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [x] I'm asking a question\r\n- [x] I've looked through the README and FAQ for 
similar questions\r\n- [x] I've searched the bugtracker for similar questions including closed ones\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.\r\n-->\r\n\r\nFox Sports Go recently changed their streaming service. Previously, I used to be able to record live streams and download event replays by passing headers into streamlink. However, recording live with streamlink \"works\" just fine, but because commercials have some kind of different codec than the actual content, I can't do anything with the resulting .ts file.\r\n\r\nHowever, I can download replays from FOX.com just fine, using a youtube-dl command like this: `youtube-dl --hls-prefer-native -f 3750 https://content-auso1.uplynk.com/preplay2/6f324d0648b34576b36ce49160add428/391dec8c1a9a07b70d3062e4bf1a6e3c/4sQNPrWNbJWMzPMP2RXiNy2SFAhlIDUYbUwS2TJwN94h.m3u8?pbs=38dc148aad7c4a7f981a8dd57493a625`\r\n\r\nThe big problems with this are that a) I have to wait until a replay is posted; and b) FOX is very inconsistent as to which events get replays posted and which do not, meaning I'm SOL if I'm trying to save an event that just doesn't have a replay for some reason. If I could record live, this wouldn't be an issue, but again, the commercials are throwing things off.\r\n\r\nOne of the output lines from youtube-dl is `[hlsnative] Total fragments: 1815 (not including 504 ad)`.\r\n\r\nSo my question is: how does youtube-dl detect which segments are ads in the .m3u8 file? If I can figure that out, perhaps I can rig streamlink to ignore those segments when recording, saving me a lot of trouble.\r\n\r\nThanks!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '427cc215310804127b55744fcc3664ede38a4a0d', 'files': [{'path': 'youtube_dl/downloader/hls.py', 'Loc': {\"('HlsFD', 'is_ad_fragment_start', 78)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "5", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/downloader/hls.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "8b7340a45eb0e3aeaa996896ff8690b6c3a32af6", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/15955", "iss_label": "", "title": "use youtube-dl with cookies file in code not from command line ", "body": "## Please follow the guide below\r\n\r\n- You will be asked some questions and requested to provide some information, please read them **carefully** and answer honestly\r\n- Put an `x` into all the boxes [ ] relevant to your *issue* (like this: `[x]`)\r\n- Use the *Preview* tab to see what your issue will actually look like\r\n\r\n---\r\n\r\n### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.03.20*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. 
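On the ad-detection question above: the record's `file_loc` points at `HlsFD.is_ad_fragment_start` in `youtube_dl/downloader/hls.py`, which flags Anvato/Uplynk ad markers in the m3u8 manifest. A sketch of that check as of the 2019-era source (tag names should be verified against your checkout; `count_ad_fragments` is an illustrative helper, not youtube-dl code):

```python
def is_ad_fragment_start(line):
    # Ad markers used by Anvato and Uplynk streams (such as the FOX stream above).
    return ((line.startswith('#ANVATO-SEGMENT-INFO') and 'type=ad' in line)
            or (line.startswith('#UPLYNK-SEGMENT') and line.endswith(',ad')))

def count_ad_fragments(manifest):
    """Count fragment URLs that fall inside an ad break of an m3u8 playlist."""
    ad_count, in_ad = 0, False
    for line in manifest.splitlines():
        if is_ad_fragment_start(line):
            in_ad = True
        elif line.startswith('#UPLYNK-SEGMENT') and line.endswith(',segment'):
            in_ad = False  # a content segment resumes
        elif in_ad and line and not line.startswith('#'):
            ad_count += 1  # a fragment URL inside an ad break
    return ad_count
```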
Issues with outdated version will be rejected.\r\n- [ ] I've **verified** and **I assure** that I'm running youtube-dl **2018.03.20**\r\n\r\n### Before submitting an *issue* make sure you have:\r\n- [ ] At least skimmed through the [README](https://github.com/rg3/youtube-dl/blob/master/README.md), **most notably** the [FAQ](https://github.com/rg3/youtube-dl#faq) and [BUGS](https://github.com/rg3/youtube-dl#bugs) sections\r\n- [ ] [Searched](https://github.com/rg3/youtube-dl/search?type=Issues) the bugtracker for similar issues including closed ones\r\n- [ ] Checked that provided video/audio/playlist URLs (if any) are alive and playable in a browser\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [X ] Question\r\n- [ ] Other\r\n\r\n---\r\n\r\n### The following sections concretize particular purposed issues, you can erase any section (the contents between triple ---) not applicable to your *issue*\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *bug report*, *site support request* or you are not completely sure provide the full verbose output as follows:\r\n\r\nAdd the `-v` flag to **your command line** you run youtube-dl with (`youtube-dl -v <your command line>`), copy the **whole** output and insert it here. It should look similar to one below (replace it with **your** log inserted between triple ```):\r\n\r\n```\r\n[debug] System config: []\r\n[debug] User config: []\r\n[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']\r\n[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251\r\n[debug] youtube-dl version 2018.03.20\r\n[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2\r\n[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4\r\n[debug] Proxy map: {}\r\n...\r\n<end of log>\r\n```\r\n\r\n---\r\n\r\n### If the purpose of this *issue* is a *site support request* please provide all kinds of example URLs support for which should be included (replace following example URLs by **yours**):\r\n- Single video: https://www.youtube.com/watch?v=BaW_jenozKc\r\n- Single video: https://youtu.be/BaW_jenozKc\r\n- Playlist: https://www.youtube.com/playlist?list=PL4lCao7KL_QFVb7Iudeipvc2BCavECqzc\r\n\r\nNote that **youtube-dl does not support sites dedicated to [copyright infringement](https://github.com/rg3/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free)**. In order for site support request to be accepted all provided example URLs should not violate any copyrights.\r\n\r\n---\r\n\r\n### Description of your *issue*, suggested solution and other information\r\n\r\nExplanation of your *issue* in arbitrary form goes here. Please make sure the [description is worded well enough to be understood](https://github.com/rg3/youtube-dl#is-the-description-of-the-issue-itself-sufficient). 
Provide as much context and examples as possible.\r\nIf work on your *issue* requires account credentials please provide them or explain how one can obtain them.\r\n\r\n\r\n\r\n\r\n\r\n```\r\nfrom __future__ import unicode_literals\r\nimport youtube_dl\r\n\r\nydl_opts = {}\r\nwith youtube_dl.YoutubeDL(ydl_opts) as ydl:\r\n ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])\r\n```\r\nThis downloads a simple YouTube video. I need to know how to add a cookies file so that I can download from my account on linda. I'm trying to create a small downloader to help speed up the process. Any idea how to add a cookies file?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8b7340a45eb0e3aeaa996896ff8690b6c3a32af6', 'files': [{'path': 'youtube_dl/YoutubeDL.py', 'Loc': {\"('YoutubeDL', None, 113)\": {'mod': [208]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/YoutubeDL.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "267d81962a0709f15f82f96b7aadbb5473a06992", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16870", "iss_label": "", "title": "[bilibili]how can i download video on page2?", "body": "### Make sure you are using the *latest* version: run `youtube-dl --version` and ensure your version is *2018.06.25*. If it's not, read [this FAQ entry](https://github.com/rg3/youtube-dl/blob/master/README.md#how-do-i-update-youtube-dl) and update. Issues with outdated version will be rejected.\r\n- [x] I've **verified** and **I assure** that I'm running youtube-dl **2018.06.25**\r\n\r\n### What is the purpose of your *issue*?\r\n- [ ] Bug report (encountered problems with youtube-dl)\r\n- [ ] Site support request (request for adding support for a new site)\r\n- [ ] Feature request (request for a new functionality)\r\n- [x] Question\r\n- [ ] Other\r\n\r\nI am trying to use youtube-dl to download a video on bilibili like https://www.bilibili.com/video/av18178195\r\n\r\nThe video has 2 pages, but when I type **youtube-dl -f 1 https://www.bilibili.com/video/av18178195**\r\nI just get the video on page 1; how can I get the video on page 2?\r\nI have seen this page https://github.com/rg3/youtube-dl/pull/16354\r\nbut when I use \r\n**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/index_2.html** or \r\n**youtube-dl -f 1 https://www.bilibili.com/video/av18178195/?p=2**\r\n\r\nit still downloads the same video from page 1.\r\nHow can I solve this problem? Thank you.\r\nIs this problem fixed? 
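For the cookies question above, `cookiefile` is the `YoutubeDL` option behind the `--cookies` command-line flag. A minimal sketch extending the snippet from the issue; the `cookies.txt` filename is an assumption:

```python
from __future__ import unicode_literals
import youtube_dl

# 'cookiefile' expects a Netscape/Mozilla-format cookies.txt export from
# the browser, the same file --cookies takes on the command line.
ydl_opts = {'cookiefile': 'cookies.txt'}
with youtube_dl.YoutubeDL(ydl_opts) as ydl:
    ydl.download(['https://www.youtube.com/watch?v=BaW_jenozKc'])
```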
I use the standalone exe version.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '267d81962a0709f15f82f96b7aadbb5473a06992', 'files': [{'path': 'youtube_dl/extractor/bilibili.py', 'Loc': {\"('BiliBiliIE', None, 25)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/bilibili.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/16883", "iss_label": "", "title": "[Feature request] Network retry, with configurability", "body": "I just ran some large youtube-dl scripts, and noticed at the end that a few videos were missing.\r\n\r\nThis was probably due to intermittent network downtime, and apparently youtube-dl doesn't do any network retry at all (I may be wrong).\r\n\r\nThus, I suggest adding an option named for example `--network-retry`, related to `--socket-timeout`. The default would be 0 to keep the current youtube-dl behavior, and I could configure it to something like 5.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'eca1f0d115e6a2712ff0d5f6b25e3ded5e52db71', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {\"(None, 'parseOpts', 41)\": {'mod': [458, 462]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "4", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/options.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "5014bd67c22b421207b2650d4dc874b95b36dda1", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/30539", "iss_label": "question", "title": "limited download speed", "body": "<!--\r\n\r\n######################################################################\r\n WARNING!\r\n IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE\r\n######################################################################\r\n\r\n-->\r\n\r\n\r\n## Checklist\r\n\r\n<!--\r\nCarefully read and work through this check list in order to prevent the most common mistakes and misuse of youtube-dl:\r\n- Look through the README (http://yt-dl.org/readme) and FAQ (http://yt-dl.org/faq) for similar questions\r\n- Search the bugtracker for similar questions: http://yt-dl.org/search-issues\r\n- Finally, put x into all relevant boxes (like this [x])\r\n-->\r\n\r\n- [ yes] I'm asking a question\r\n- [ yes] I've looked through the README and FAQ for similar questions\r\n- [yes ] I've searched the bugtracker for similar questions including closed ones\r\n\r\n\r\n## Question\r\n\r\n<!--\r\nAsk your question in an arbitrary form. Please make sure it's worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient.\r\n-->\r\n\r\nWRITE QUESTION HERE\r\n\r\nHello .. for a few days now I have been experiencing a drop in download speed from the YouTube site using youtube-dl .. can you resolve it? I tried downloading videos from other websites and they download at full speed .. it only happens to me with the YouTube site .. 
to me it looks like they made some change to their platform ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '5014bd67c22b421207b2650d4dc874b95b36dda1', 'files': [{'path': 'youtube_dl/extractor/youtube.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/extractor/youtube.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "ytdl-org", "repo_name": "youtube-dl", "base_commit": "e90d175436e61e207e0b0cae7f699494dcf15922", "iss_html_url": "https://github.com/ytdl-org/youtube-dl/issues/9104", "iss_label": "", "title": "Chinese title was missing !", "body": "```\nroot@kangland:/var/www/ydy# youtube-dl -v w0dMz8RBG7g\n[debug] System config: []\n[debug] User config: []\n[debug] Command-line args: [u'-v', u'w0dMz8RBG7g']\n[debug] Encodings: locale ANSI_X3.4-1968, fs ANSI_X3.4-1968, out ANSI_X3.4-1968, pref ANSI_X3.4-1968\n[debug] youtube-dl version 2016.04.01\n[debug] Python version 2.7.6 - Linux-2.6.32-042stab113.11-i686-with-Ubuntu-14.04-trusty\n[debug] exe versions: none\n[debug] Proxy map: {}\n[youtube] w0dMz8RBG7g: Downloading webpage\n[youtube] w0dMz8RBG7g: Downloading video info webpage\n[youtube] w0dMz8RBG7g: Extracting video information\n[youtube] {22} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] w0dMz8RBG7g: Downloading player https://s.ytimg.com/yts/jsbin/player-en_US-vfli5QvRo/base.js\n[youtube] {43} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {18} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {5} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {36} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {17} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {136} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {247} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {135} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {244} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {134} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {243} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {133} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {242} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {160} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {278} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {140} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {171} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {249} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {250} signature length 41.43, html5 player en_US-vfli5QvRo\n[youtube] {251} signature length 41.43, html5 player en_US-vfli5QvRo\n[debug] Invoking downloader on 
u'https://r2---sn-a8au-vgqe.googlevideo.com/videoplayback?ms=au&mt=1460039622&pl=40&mv=m&key=yt6&pte=yes&mm=31&mn=sn-a8au-vgqe&sver=3&fexp=9407059%2C9416126%2C9416891%2C9420452%2C9422596%2C9423662%2C9426926%2C9427902%2C9428398%2C9432364&ratebypass=yes&ipbits=0&initcwndbps=26957500&expire=1460061513&upn=NhCteH8M5OA&mime=video%2Fmp4&axtags=tx%3D9417362&id=o-AEE-ylzEiNeRWF2HIs5_rsDGUftXqgxkV7V0eUSq7oZ4&dur=214.111&source=youtube&ip=2602%3Aff62%3A104%3Ae6%3A%3A&sparams=axtags%2Cdur%2Cid%2Cinitcwndbps%2Cip%2Cipbits%2Citag%2Clmt%2Cmime%2Cmm%2Cmn%2Cms%2Cmv%2Cpl%2Cpte%2Cratebypass%2Crequiressl%2Csource%2Cupn%2Cexpire&requiressl=yes&lmt=1458219184364643&itag=22&signature=B1E1AF27412C916392FF49F1D60F0771145BE274.DA5587721204D947940DB57A584188E732C36433'\n[download] Destination: Wanting - (You Exist In My Song) [Trad. Chinese] [Official Music Video]-w0dMz8RBG7g.mp4\n[download] 100% of 32.20MiB in 00:00\n\n```\n\n```\nroot@kangland:/var/www/ydy# locale\nLANG=\nLANGUAGE=\nLC_CTYPE=\"POSIX\"\nLC_NUMERIC=\"POSIX\"\nLC_TIME=\"POSIX\"\nLC_COLLATE=\"POSIX\"\nLC_MONETARY=\"POSIX\"\nLC_MESSAGES=\"POSIX\"\nLC_PAPER=\"POSIX\"\nLC_NAME=\"POSIX\"\nLC_ADDRESS=\"POSIX\"\nLC_TELEPHONE=\"POSIX\"\nLC_MEASUREMENT=\"POSIX\"\nLC_IDENTIFICATION=\"POSIX\"\nLC_ALL=\n```\n\n```\nroot@kangland:/var/www/ydy# locale -a\nC\nC.UTF-8\nPOSIX\nzh_CN.utf8\nzh_HK.utf8\nzh_TW.utf8\n```\n\n**Run :** `youtube-dl -f 'best[height=360]' --restrict-filenames -i -o '%(playlist)s/%(playlist_index)s - %(title)s.%(ext)s' PL1OKxDwI_y_AO1Lb-zO57wYdpWqhk7MUs`\n\n**Result :** [download] _/01 - _.mp4\n\nHow to fix chinese title ? \n\nThank you so much !\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e90d175436e61e207e0b0cae7f699494dcf15922', 'files': [{'path': 'youtube_dl/options.py', 'Loc': {\"(None, 'parseOpts', 22)\": {'mod': [447]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["youtube_dl/options.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "3794f1e20a56f3b7bcd23f82a006e266f2a57a05", "iss_html_url": "https://github.com/localstack/localstack/issues/2511", "iss_label": "type: usage", "title": "Cannot connect to DynamoDB from lambda", "body": "<!-- Love localstack? Please consider supporting our collective:\r\n\ud83d\udc49 https://opencollective.com/localstack/donate -->\r\n\r\n# Type of request: This is a ...\r\n\r\n- [x] bug report\r\n- [ ] feature request\r\n\r\n# Detailed description\r\nI'm using localstack for local development. 
I have a DynamoDB table named `readings` and I'd like \r\nto insert items from a lambda function.\r\nI have a simple function in python runtime:\r\n\r\n```python\r\nimport os\r\nimport boto3\r\n\r\ndef lambda_handler(events, context):\r\n DYNAMODB_ENDPOINT_URL = os.environ.get(\"DYNAMODB_ENDPOINT_URL\")\r\n dynamodb = boto3.resource(\"dynamodb\", endpoint_url=DYNAMODB_ENDPOINT_URL)\r\n readings_table = dynamodb.Table(DYNAMODB_READINGS_TABLE_NAME)\r\n\r\n readings_table.put_item(Item={\"reading_id\": \"10\", \"other\": \"test\"})\r\n```\r\n\r\nI cannot figure out what is the proper endpoint url for my local DynamoDB.\r\nI have tried different combinations of `localhost`, `localstack` and ports `4566`, `4569`, each time I get error `EndpointConnectionError`\r\n\r\n## Expected behavior\r\nItems are inserted in the table.\r\n\r\n## Actual behavior\r\nLambda cannot connect to dynamodb and error `[ERROR] EndpointConnectionError: Could not connect to the endpoint URL: \"http://localstack:4569/\"` is raised.\r\n\r\n# Steps to reproduce\r\n\r\nRun localstack image with docker-compose, set `LOCALSTACK_HOSTNAME=localstack` and try to access dynamodb resource from lambda.\r\n\r\n## Command used to start LocalStack\r\ndocker-compose service I'm using:\r\n```yml\r\n localstack:\r\n image: localstack/localstack:0.11.2\r\n ports:\r\n - 4566:4566\r\n - 8080:8080\r\n environment:\r\n SERVICES: \"dynamodb,sqs,lambda,iam\"\r\n DATA_DIR: \"/tmp/localstack/data\"\r\n PORT_WEB_UI: \"8080\"\r\n LOCALSTACK_HOSTNAME: localstack\r\n LAMBDA_EXECUTOR: docker\r\n AWS_ACCESS_KEY_ID: \"test\"\r\n AWS_SECRET_ACCESS_KEY: \"test\"\r\n AWS_DEFAULT_REGION: \"us-east-1\"\r\n volumes:\r\n - localstack_volume:/tmp/localstack/data\r\n - /var/run/docker.sock:/var/run/docker.sock\r\n # When a container is started for the first time, it will execute files with extensions .sh that are found in /docker-entrypoint-initaws.d. \r\n # Files will be executed in alphabetical order. You can easily create aws resources on localstack using `awslocal` (or `aws`) cli tool in the initialization scripts.\r\n # Here I run creating dynamodb tables, roles, etc.\r\n - ./localstack-startup-scripts/:/docker-entrypoint-initaws.d/\r\n```\r\n\r\n## Client code (AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\n...\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [19], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "iss_html_url": "https://github.com/localstack/localstack/issues/1078", "iss_label": "", "title": "Connect to localhost:4568 [localhost/127.0.0.1] failed: Connection refused (Connection refused)", "body": "Hi there, I am having trouble connecting to Kinesis on localstack. 
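For the DynamoDB-from-Lambda record above, a sketch of the usual fix: build the endpoint from the `LOCALSTACK_HOSTNAME` variable that LocalStack injects into Lambda containers, rather than hard-coding `localhost`; 4566 is the edge port mapped in the compose file above. The `READINGS_TABLE` env var is an assumption, since the original snippet never defined `DYNAMODB_READINGS_TABLE_NAME`:

```python
import os
import boto3

# LOCALSTACK_HOSTNAME resolves to the LocalStack container from inside a
# Lambda container, which is why http://localhost:4569 fails there.
ENDPOINT_URL = "http://%s:4566" % os.environ.get("LOCALSTACK_HOSTNAME", "localhost")

def lambda_handler(event, context):
    dynamodb = boto3.resource("dynamodb", endpoint_url=ENDPOINT_URL)
    table = dynamodb.Table(os.environ.get("READINGS_TABLE", "readings"))
    table.put_item(Item={"reading_id": "10", "other": "test"})
    return {"status": "ok"}
```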
Everything runs fine when I run it locally, the error happens inside of our Jenkins pipeline.\r\n\r\nHere is the Dockerfile I am using:\r\n```\r\nFROM hseeberger/scala-sbt:8u181_2.12.7_1.2.6\r\n\r\nUSER root\r\nRUN apt-get update\r\nRUN apt-get -y install curl\r\nRUN curl -sL https://deb.nodesource.com/setup_8.x | bash -\r\nRUN apt-get -y install nodejs\r\nRUN apt-get install npm\r\nRUN curl -L \"https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)\" -o /usr/local/bin/docker-compose\r\nRUN chmod +x /usr/local/bin/docker-compose\r\n```\r\n\r\nAnd here is my docker-compose.yml:\r\n```\r\nversion: '3.6'\r\n\r\nservices:\r\n # AWS services in docker env\r\n localstack:\r\n image: localstack/localstack:latest\r\n environment:\r\n - SERVICES=kinesis,dynamodb,s3,cloudwatch\r\n - HOSTNAME_EXTERNAL=localstack\r\n - DATA_DIR=/tmp/localstack/data\r\n volumes:\r\n - \"/tmp:/tmp\"\r\n ports:\r\n - \"4568:4568\"\r\n - \"4569:4569\"\r\n - \"4572:4572\"\r\n - \"4582:4582\"\r\n\r\n postgres:\r\n image: \"postgres:9.6\"\r\n restart: always\r\n ports:\r\n - \"5432:5432\"\r\n environment:\r\n POSTGRES_USER: dev\r\n POSTGRES_PASSWORD: *******\r\n POSTGRES_DB: table\r\n PGPASSWORD: *******\r\n volumes:\r\n - ./docker/postgres-init:/docker-entrypoint-initdb.d\r\n\r\n mocks:\r\n image: \"jordimartin/mmock\"\r\n volumes:\r\n - \"./docker/mocks:/config\"\r\n ports:\r\n - \"8082:8082\"\r\n - \"8083:8083\"\r\n - \"8084:8084\"\r\n\r\n aws-create-stream:\r\n image: \"ivonet/aws-cli:1.0.0\"\r\n links:\r\n - localstack\r\n volumes:\r\n - ${HOME}/.aws:/root/.aws:ro\r\n command: --endpoint-url=http://localstack:4568 kinesis create-stream --stream-name RawScanPipe --shard-count 1\r\n environment:\r\n - AWS_DEFAULT_REGION=us-east-1\r\n\r\n #PGAdmin gives a nice gui on the PostgreSQL DB\r\n pgadmin:\r\n image: dpage/pgadmin4\r\n links:\r\n - postgres\r\n depends_on:\r\n - postgres\r\n ports:\r\n - \"8888:80\"\r\n volumes:\r\n - ./docker/pgadmin:/var/lib/pgadmin\r\n environment:\r\n PGADMIN_DEFAULT_EMAIL: *********\r\n PGADMIN_DEFAULT_PASSWORD: *********\r\n```\r\n\r\nIn case it matters, here is the segment in our Jenkins file where this gets called:\r\n```\r\ndef sbtInside() {\r\n return \"-u root -v /usr/bin/docker:/usr/bin/docker \" +\r\n \"-v /usr/local/bin/aws:/usr/local/bin/aws \" +\r\n \"-v /var/run/docker.sock:/var/run/docker.sock \" +\r\n \"-v /usr/lib/x86_64-linux-gnu/libltdl.so.7:/usr/lib/libltdl.so.7 \" +\r\n \"-v $HOME/.ivy2:/root/.ivy2 \" +\r\n \"-v $HOME/.sbt:/root/.sbt\"\r\n}\r\n\r\n stage(\"Unit/Functional Tests & Create Dockerfile\") {\r\n app.inside(sbtInside()) {\r\n try {\r\n echo \"Starting unit tests...\"\r\n sh \"TARGET=LOCAL sbt clean test\"\r\n\r\n echo \"Starting up test stack...\"\r\n sh \"docker-compose -f docker-compose.yml up -d\"\r\n\r\n echo \"Starting functional tests...\"\r\n sh \"TARGET=LOCAL \" +\r\n \"PRODUCT_ENABLED=true \" +\r\n \"sbt clean functional/test\"\r\n } finally {\r\n echo \"Tests complete!\"\r\n sh \"docker-compose -f docker-compose.yml down -v\"\r\n sh \"sbt docker\"\r\n }\r\n }\r\n }\r\n```\r\n\r\nI am sure I am missing something simple, I just can't figure out what it is!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": 
"Config\nCode"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "1c5f2e9650155a839cc842a9cd07faf3e76ed5d2", "iss_html_url": "https://github.com/localstack/localstack/issues/1095", "iss_label": "", "title": "Healthcheck when running in docker", "body": "I'm running localstack with docker-compose as a dependency for a service that I'm developing. The problem is that my service calls localstack before it's fully initialized. The only solution I could find so far is a hard `sleep <seconds>` at start-up, but that only works on my specific system and produces unexpected results for other developers. Can localstack expose a healthcheck, so I can have docker-compose start my service after localstack is \"healthy\"?\r\n\r\nA trimmed down version of my docker-compose.yml looks something like this:\r\n```yaml\r\nmyservice:\r\n command: \"sh -c 'sleep 10 && npm run start'\" #grrrrr\r\n depends_on:\r\n - aws\r\n # aws:\r\n # condition: service_healthy\r\naws:\r\n image: localstack/localstack\r\n environment:\r\n SERVICES: s3:81,sqs:82,ses:83\r\n HOSTNAME_EXTERNAL: aws\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1c5f2e9650155a839cc842a9cd07faf3e76ed5d2', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "5d11af78ae1d19560f696a9e1abb707bd115c390", "iss_html_url": "https://github.com/localstack/localstack/issues/4970", "iss_label": "type: bug\nstatus: triage needed\narea: configuration\naws:cloudformation\narea: networking", "title": "Lambda invocation exception", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nCreating and/or updating Lambda functions in docker does not work after updating LocalStack image to the latest version with the following error in LocalStack logs:\r\n```\r\n2021-11-20T03:33:32.357:DEBUG:localstack.services.awslambda.lambda_executors: Lambda arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role result / log output:\r\n\r\n> standard_init_linux.go:228: exec user process caused: exec format error\r\n2021-11-20T03:33:32.814:INFO:localstack.services.awslambda.lambda_api: Error executing Lambda function arn:aws:lambda:us-east-2:000000000000:function:lambda-socket-local-custom-resource-apigw-cw-role: Lambda process returned with error. Result: . 
Output:\r\nstandard_init_linux.go:228: exec user process caused: exec format error Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 608, in run_lambda_executor\r\n result, log_output = self.execute_in_container(\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_launcher.py.enc\", line 272, in docker_separate_execute_in_container\r\n File \"/opt/code/localstack/localstack/utils/docker_utils.py\", line 1335, in start_container\r\n raise ContainerException(\r\nlocalstack.utils.docker_utils.ContainerException: Docker container returned with exit code 1\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_api.py\", line 809, in run_lambda\r\n result = LAMBDA_EXECUTOR.execute(\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 441, in execute\r\n return do_execute()\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 431, in do_execute\r\n return _run(func_arn=func_arn)\r\n File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 158, in wrapped\r\n raise e\r\n File \"/opt/code/localstack/localstack/utils/cloudwatch/cloudwatch_util.py\", line 154, in wrapped\r\n result = func(*args, **kwargs)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 418, in _run\r\n raise e\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 414, in _run\r\n result = self._execute(lambda_function, inv_context)\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 726, in _execute\r\n result = self.run_lambda_executor(lambda_function=lambda_function, inv_context=inv_context)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/awslambda/lambda_extended.py.enc\", line 548, in run_lambda_executor\r\n File \"/opt/code/localstack/localstack/services/awslambda/lambda_executors.py\", line 649, in run_lambda_executor\r\n raise InvocationException(\r\nlocalstack.services.awslambda.lambda_executors.InvocationException: Lambda process returned with error. Result: . Output:\r\nstandard_init_linux.go:228: exec user process caused: exec format error\r\n\r\n2021-11-20T03:33:55.187:INFO:localstack_ext.services.cloudformation.service_models: Unable to fetch CF custom resource result from s3://localstack-cf-custom-resources-results/62c433d4 . Existing keys: []\r\n2021-11-20T03:33:55.189:DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack \"lambda-socket-local\": An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist. 
Traceback (most recent call last):\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1482, in _run\r\n self.do_apply_changes_in_loop(changes, stack, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1554, in do_apply_changes_in_loop\r\n self.apply_change(change, stack, new_resources, stack_name=stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 1619, in apply_change\r\n result = deploy_resource(resource_id, new_resources, stack_name)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 778, in deploy_resource\r\n result = execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)\r\n File \"/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py\", line 829, in execute_resource_action\r\n result = func[\"function\"](resource_id, resources, resource_type, func, stack_name)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py\", line 61, in create_custom_resource\r\n result=retry(fetch_result,retries=KIGak(CUSTOM_RESOURCES_RESULT_POLL_TIMEOUT/2),sleep=2)\r\n File \"/opt/code/localstack/localstack/utils/common.py\", line 812, in retry\r\n raise raise_error\r\n File \"/opt/code/localstack/localstack/utils/common.py\", line 808, in retry\r\n return function(**kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/services/cloudformation/models/custom.py\", line 58, in fetch_result\r\n return aws_utils.download_s3_object(CUSTOM_RESOURCES_RESULTS_BUCKET,result_key)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/localstack_ext/utils/aws/aws_utils.py.enc\", line 31, in download_s3_object\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 391, in _api_call\r\n return self._make_api_call(operation_name, kwargs)\r\n File \"/opt/code/localstack/.venv/lib/python3.8/site-packages/botocore/client.py\", line 719, in _make_api_call\r\n raise error_class(parsed_response, operation_name)\r\nbotocore.errorfactory.NoSuchKey: An error occurred (NoSuchKey) when calling the GetObject operation: The specified key does not exist.\r\n```\r\n\r\n### Expected Behavior\r\n\r\nLambda create and/or update operations should pass successfully all the way to the end without any errors.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith a docker-compose file\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n```yml\r\nservices:\r\n localstack:\r\n container_name: localstack\r\n image: localstack/localstack\r\n ports:\r\n - 443:443\r\n - 4510-4530:4510-4530\r\n - 4566:4566\r\n - 4571:4571\r\n environment:\r\n - LOCALSTACK_API_KEY=${LOCALSTACK_LICENSE}\r\n - USE_LIGHT_IMAGE=1\r\n - IMAGE_NAME=localstack/localstack\r\n - MAIN_CONTAINER_NAME=localstack\r\n - SERVICES=cloudformation,cloudfront,apigateway,apigatewayv2,iam,secretsmanager,lambda,s3,sqs,sts,ec2,kafka,elb,elbv2\r\n - DEFAULT_REGION=us-east-1\r\n - AWS_ACCESS_KEY_ID=test\r\n - AWS_SECRET_ACCESS_KEY=test\r\n - EAGER_SERVICE_LOADING=1\r\n - S3_SKIP_SIGNATURE_VALIDATION=1\r\n - DEBUG=1\r\n volumes:\r\n - /var/run/docker.sock:/var/run/docker.sock\r\n network_mode: bridge\r\n```\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\n\r\nA test case 
available at [GitHub](https://github.com/abbaseya/localstack-msk-lambda-test) - test command `./socket.sh`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: macOS 12.0.1\r\n- LocalStack: latest\r\n- AWS CLI: 2.2.35\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\n#4932 ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [96], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "localstack", "repo_name": "localstack", "base_commit": "c07094dbf52c947e77d952825eb4daabf409655d", "iss_html_url": "https://github.com/localstack/localstack/issues/5516", "iss_label": "type: bug\nstatus: triage needed\nstatus: response required\naws:cognito", "title": "bug: JWT ID Token issued by cognito-idp can not be verified in v0.14.0 but can in 0.11.5", "body": "### Is there an existing issue for this?\r\n\r\n- [X] I have searched the existing issues\r\n\r\n### Current Behavior\r\n\r\nJWT tokens issued by cognito can not be verified.\r\n\r\n### Expected Behavior\r\n\r\nJWT tokens issued by cognito should be verifiable.\r\n\r\n### How are you starting LocalStack?\r\n\r\nWith the `localstack` script\r\n\r\n### Steps To Reproduce\r\n\r\n#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)\r\n\r\n`LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`\r\n\r\n`LocalStack CLI 0.14.0.1`\r\n`LocalStack version: 0.14.0`\r\n\r\n#### Client commands (e.g., AWS SDK code snippet, or sequence of \"awslocal\" commands)\r\nCreate the following files in some directory:\r\n`package.json` file:\r\n```json\r\n{\r\n \"name\": \"localstack-jwt\",\r\n \"version\": \"1.0.0\",\r\n \"description\": \"\",\r\n \"main\": \"index.js\",\r\n \"scripts\": {\r\n \"test\": \"echo \\\"Error: no test specified\\\" && exit 1\"\r\n },\r\n \"keywords\": [],\r\n \"author\": \"\",\r\n \"license\": \"ISC\",\r\n \"dependencies\": {\r\n \"jsonwebtoken\": \"^8.5.1\",\r\n \"jwk-to-pem\": \"^2.0.5\",\r\n \"node-fetch\": \"^2.6.7\"\r\n }\r\n}\r\n\r\n```\r\n`create-user-pool.json` file:\r\n\r\n```json\r\n{\r\n \"PoolName\": \"test\",\r\n \"Policies\": {\r\n \"PasswordPolicy\": {\r\n \"MinimumLength\": 6,\r\n \"RequireUppercase\": false,\r\n \"RequireLowercase\": false,\r\n \"RequireNumbers\": false,\r\n \"RequireSymbols\": false,\r\n \"TemporaryPasswordValidityDays\": 5\r\n }\r\n },\r\n \"AdminCreateUserConfig\": {\r\n \"AllowAdminCreateUserOnly\": false,\r\n \"UnusedAccountValidityDays\": 5\r\n }\r\n}\r\n\r\n```\r\n\r\n`localstack.js` file:\r\n```javascript\r\nconst jwkToPem = require('jwk-to-pem');\r\nconst jwt = require('jsonwebtoken');\r\nconst ps = require('process');\r\nconst fetch = require('node-fetch');\r\n(async () => {\r\n const token = ps.argv[2];\r\n console.log('<== TOKEN:', token);\r\n console.log('==> http://localhost:4566/userpool/.well-known/jwks.json')\r\n const jwksResponse = await fetch('http://localhost:4566/userpool/.well-known/jwks.json');\r\n const jwks = await jwksResponse.json();\r\n console.log('<==', jwks);\r\n\r\n let decodedToken = jwt.decode(token, { complete: true });\r\n console.log('DECODED TOKEN:', decodedToken);\r\n const publicKey = jwkToPem(jwks.keys[0]);\r\n console.log('PUBLIC KEY:', publicKey);\r\n try {\r\n const decoded = 
jwt.verify(token, publicKey);\r\n console.log('!!! JWT is valid');\r\n } catch (err) {\r\n console.error('!!! ERROR:', err.message);\r\n }\r\n\r\n})();\r\n```\r\n\r\n`setup.sh` file:\r\n```bash\r\n#!/bin/bash\r\necho \"Creating User Pool\"\r\nUSERNAME=user1\r\nPASSWORD=password1\r\nUSER_POOL_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool \\\r\n --pool-name test \\\r\n --cli-input-json file://create-user-pool.json | jq -r '.UserPool.Id' )\r\necho \"User Pool Created: ${USER_POOL_ID}\"\r\n\r\necho \"Creating User Pool Client\"\r\nCLIENT_ID=$( aws --endpoint-url=http://localhost:4566 cognito-idp create-user-pool-client \\\r\n--user-pool-id \"$USER_POOL_ID\" \\\r\n--client-name client \\\r\n--explicit-auth-flows ALLOW_USER_PASSWORD_AUTH | jq -r '.UserPoolClient.ClientId')\r\necho \"User Pool Created: ${CLIENT_ID}\"\r\n\r\necho \"Sign Up User: user1/password1\"\r\naws --endpoint-url=http://localhost:4566 cognito-idp sign-up \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --username \"$USERNAME\" \\\r\n --password \"$PASSWORD\" && echo \"Sign Up Success\" || echo \"Failed to Sign Up\"\r\n\r\necho \"Please enter confirmation code printed in terminal with 'localstack start' and hit Enter:\"\r\nread CONFIRMATION_CODE\r\n\r\naws --endpoint-url=http://localhost:4566 cognito-idp confirm-sign-up \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --username \"$USERNAME\" \\\r\n --confirmation-code \"$CONFIRMATION_CODE\" && echo \"User Confirmed\" || echo \"Unable to confirm\"\r\n\r\necho \"Authenticating User\"\r\nID_TOKEN=$( aws --endpoint-url=http://localhost:4566 cognito-idp initiate-auth \\\r\n --auth-flow USER_PASSWORD_AUTH \\\r\n --client-id \"$CLIENT_ID\" \\\r\n --auth-parameters USERNAME=\"$USERNAME\",PASSWORD=\"$PASSWORD\" | jq -r '.AuthenticationResult.IdToken' )\r\n\r\necho \"Validating ID TOKEN\"\r\nnode localstack.js \"$ID_TOKEN\"\r\n\r\n```\r\n\r\n## Run\r\n* `npm install`\r\n* start localstack `LOCALSTACK_API_KEY={MY_KEY} SERVICES=cognito-idp,iam,lambda,cloudformation,s3,s3api,sts DISABLE_CORS_CHECKS=1 localstack start`\r\n* run `./setup.sh`\r\n* script will ask for confirmation code printed in localstack console\r\n* finally script will output `!!! ERROR: invalid signature`\r\n\r\n## Try the same with 0.11.5\r\n* `./setup.sh` will print `!!! JWT is valid`\r\n\r\n\r\n### Environment\r\n\r\n```markdown\r\n- OS: MacOS Monterey 12.2.1\r\n- LocalStack: 0.14.0\r\n```\r\n\r\n\r\n### Anything else?\r\n\r\nRepository with scripts you can use to reproduce issue: https://github.com/poul-kg/localstack-jwt", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [82], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/499", "iss_label": "Bug", "title": "raise Exception(\"`interpreter.chat()` requires a display. 
Set `display=True` or pass a message into `interpreter.chat(message)`.\")", "body": "### Describe the bug\n\nFresh install on ubuntu 22,\r\nI'm using interpreter in terminal.\r\n\r\nAfter sending a prompt, at some point during the answer the program crashes\r\n```\r\n\r\n> Traceback (most recent call last):\r\n File \"/home/fauxprophet/Documents/Ops/openai/bin/interpreter\", line 8, in <module>\r\n sys.exit(cli())\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 21, in cli\r\n cli(self)\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/cli/cli.py\", line 145, in cli\r\n interpreter.chat()\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 65, in chat\r\n for _ in self._streaming_chat(message=message, display=display):\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 86, in _streaming_chat\r\n yield from terminal_interface(self, message)\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/terminal_interface/terminal_interface.py\", line 50, in terminal_interface\r\n for chunk in interpreter.chat(message, display=False, stream=True):\r\n File \"/home/fauxprophet/Documents/Ops/openai/lib/python3.10/site-packages/interpreter/core/core.py\", line 106, in _streaming_chat\r\n raise Exception(\"`interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.\")\r\nException: `interpreter.chat()` requires a display. Set `display=True` or pass a message into `interpreter.chat(message)`.\r\n\r\n```\r\n\n\n### Reproduce\n\n1. open terminal\r\n2. run cmd : \"interpreter\"\r\n3. ask something like \"can you change the color of my terminal? provide me with a few different options, and let me choose using a keystroke (1,2,3)?\"\r\n4. Wait for answers\r\n5. 
While answering the program crashes\n\n### Expected behavior\n\nNot crash\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.1.5\n\n### Python version\n\n3.10.12\n\n### Operating System name and version\n\nUbuntu 22\n\n### Additional context\n\n_No response_", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'dee41b6932a0d9b5569b1abf9144b7ffd8c3c7ad', 'files': [{'path': 'interpreter/core/core.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/core/core.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/15", "iss_label": "", "title": "Error: cannot import name 'cli' from 'interpreter'", "body": "```console\r\n\r\n\u2570\u2500$ uname -a\r\nLinux lab 6.2.0-26-generic #26~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Thu Jul 13 16:27:29 UTC 2 x86_64 x86_64 x86_64 GNU/Linux\r\n\r\n\u2570\u2500$ pip --version 1 \u21b5\r\npip 22.0.2 from /usr/lib/python3/dist-packages/pip (python 3.10)\r\n\r\n\u2570\u2500$ interpreter \r\nTraceback (most recent call last):\r\n File \"/usr/local/bin/interpreter\", line 5, in <module>\r\n from interpreter import cli\r\nImportError: cannot import name 'cli' from 'interpreter' (unknown location)\r\n\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1bb7b19eeb4264f0d7b6410409af6f1cdbf31f3d', 'files': [{'path': 'interpreter/interpreter.py', 'Loc': {'(None, None, None)': {'mod': [1]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["interpreter/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "36ec07125efec86594c91e990f68e0ab214e7edf", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1548", "iss_label": "", "title": "run interpreter --model ollama/qwen2.5:3b error", "body": "### Bug Description\r\n\r\nWhen executing the command `interpreter --model ollama/qwen2.5:3b`, an error occurs with the specific error message:\r\n\r\n```\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)\r\n```\r\n\r\nThis error indicates that there is an unterminated string while trying to parse a JSON string, which usually happens when the response data is incomplete or improperly formatted.\r\n\r\n### Error Log\r\n\r\n```plaintext\r\n\r\nC:\\Users\\unsia>interpreter --model ollama/qwen2.5:3b\r\n\r\n\u258c Model set to ollama/qwen2.5:3b\r\n\r\nLoading qwen2.5:3b...\r\n\r\nTraceback (most recent call last):\r\n File \"<frozen runpy>\", line 198, in _run_module_as_main\r\n File \"<frozen runpy>\", line 88, in _run_code\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Scripts\\interpreter.exe\\__main__.py\", line 7, in <module>\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\start_terminal_interface.py\", line 612, in main\r\n start_terminal_interface(interpreter)\r\n File 
\"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\start_terminal_interface.py\", line 560, in start_terminal_interface\r\n validate_llm_settings(\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\terminal_interface\\validate_llm_settings.py\", line 109, in validate_llm_settings\r\n interpreter.llm.load()\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 397, in load\r\n self.interpreter.computer.ai.chat(\"ping\")\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\computer\\ai\\ai.py\", line 134, in chat\r\n for chunk in self.computer.interpreter.llm.run(messages):\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 322, in run\r\n yield from run_tool_calling_llm(self, params)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\run_tool_calling_llm.py\", line 178, in run_tool_calling_llm\r\n for chunk in llm.completions(**request_params):\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 466, in fixed_litellm_completions\r\n raise first_error # If all attempts fail, raise the first error\r\n ^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\interpreter\\core\\llm\\llm.py\", line 443, in fixed_litellm_completions\r\n yield from litellm.completion(**params)\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\litellm\\llms\\ollama.py\", line 455, in ollama_completion_stream\r\n raise e\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\site-packages\\litellm\\llms\\ollama.py\", line 433, in ollama_completion_stream\r\n function_call = json.loads(response_content)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\__init__.py\", line 346, in loads\r\n return _default_decoder.decode(s)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\decoder.py\", line 337, in decode\r\n obj, end = self.raw_decode(s, idx=_w(s, 0).end())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\unsia\\AppData\\Local\\Programs\\Python\\Python311\\Lib\\json\\decoder.py\", line 353, in raw_decode\r\n obj, end = self.scan_once(s, idx)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\njson.decoder.JSONDecodeError: Unterminated string starting at: line 1 column 2 (char 1)\r\n\r\n\r\n```\r\n\r\n### Analysis Process\r\n\r\n- **Call Stack**: The error occurs in the file `litellm/llms/ollama.py` when attempting to parse the model's response using `json.loads(response_content)`.\r\n- **Potential Causes**:\r\n - The format of the data returned by the model may not meet expectations.\r\n - It might be due to network issues, server-side problems, or the model's response format being non-compliant, leading to empty or partial responses from the model.\r\n\r\n### Suggested Solutions\r\n\r\n1. **Check the Model's Response**: Ensure that the API response from the model is complete and properly formatted as JSON. Debugging can be facilitated by printing out `response_content`.\r\n2. 
**Catch Errors and Print More Information**: Before calling `json.loads()`, add checks to ensure that `response_content` is indeed a valid JSON string.\r\n\r\nExample Code:\r\n\r\n```python\r\nif response_content:\r\n try:\r\n parsed_data = json.loads(response_content)\r\n except json.JSONDecodeError as e:\r\n print(f\"JSON Decode Error: {e}\")\r\n print(f\"Response content: {response_content}\")\r\nelse:\r\n print(\"Empty response content\")\r\n```\r\n\r\n### Steps to Reproduce\r\n\r\nTo be filled with specific steps to reproduce this issue.\r\n\r\n### Expected Behavior\r\n\r\nTo be filled with the expected behavior from the user's perspective.\r\n\r\n### Environment Information\r\n\r\n- **Open Interpreter Version**: Open Interpreter 0.4.3 Developer Preview\r\n- **Python Version**: Python 3.11.0\r\n- **Operating System**: Windows 11\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '36ec07125efec86594c91e990f68e0ab214e7edf', 'files': [{'path': 'docs/usage/terminal/arguments.mdx', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc\n"}, "loctype": {"code": [], "doc": ["docs/usage/terminal/arguments.mdx"], "test": [], "config": [], "asset": []}} +{"organization": "OpenInterpreter", "repo_name": "open-interpreter", "base_commit": "8fb4668dc7451ac58ac57ba587ed77194469f739", "iss_html_url": "https://github.com/OpenInterpreter/open-interpreter/issues/1175", "iss_label": "", "title": "Error when importing interpreter", "body": "### Describe the bug\n\nI have the following error when I try to import interpreter:\r\n```\r\nTraceback (most recent call last):\r\n File \"/home/seba/workspace/AutoProgrammer/interpreter.py\", line 1, in <module>\r\n from interpreter import interpreter\r\n File \"/home/seba/workspace/AutoProgrammer/interpreter.py\", line 1, in <module>\r\n from interpreter import interpreter\r\nImportError: cannot import name 'interpreter' from partially initialized module 'interpreter' (most likely due to a circular import\r\n```\r\nI'm not a Python expert, but can't figure out what I did wrong. I installed open-interpreter with pip, pip in venv, conda but nothing helps. Other libs like crewai have no problem with imports.\n\n### Reproduce\n\n1. install open-interpreter\r\n2. import into .py file `from interpreter import interpreter`\r\n3. 
run file\n\n### Expected behavior\n\nImport works\n\n### Screenshots\n\n_No response_\n\n### Open Interpreter version\n\n0.2.4\n\n### Python version\n\n3.11.8\n\n### Operating System name and version\n\nFedora\n\n### Additional context\n\nTested with open-interpreter `0.2.0` and `0.2.4`, python `3.10` and `3.11`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"path": "/home/seba/workspace/AutoProgrammer/interpreter.py"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["/home/seba/workspace/AutoProgrammer/interpreter.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "3bc25680529cdb6b5d407c8332e820aeb2e0b948", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/66", "iss_label": "", "title": "WebSocket error code", "body": "\r\n\"Your demonstration website has the same error, please take a look.\"", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '3bc25680529cdb6b5d407c8332e820aeb2e0b948', 'files': [{'path': 'docker-compose.yml', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Config"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "2f88cf9b2568163954ecc7c20ef9879263bfc9ba", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/476", "iss_label": "", "title": "Error generating code. Please contact support.", "body": "I have already started the project both frontend and backend but when placing the image I get the following error \"Error generating code. Please contact support.\" Could you help me with this problem?\r\n![image](https://github.com/user-attachments/assets/a71c97fe-c3c2-419e-b036-0a74ee577279)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "1", "info_type": "Other\nEnvironment variable\nA misunderstanding of one loc in the doc"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "4e30b207c1ee9ddad05a37c31a11ac5a182490b7", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/270", "iss_label": "", "title": "Error configuring ANTHROPIC API KEY in .env file", "body": "I added \"ANTHROPIC_API_KEY=s****\" to the .env file\r\n\r\n\"No Anthropic API key found. 
Please add the environment variable ANTHROPIC_API_KEY to backend/.env\"\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '4e30b207c1ee9ddad05a37c31a11ac5a182490b7', 'files': [{'path': 'backend/config.py', 'Loc': {'(None, None, None)': {'mod': [6]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": ["backend/config.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "226af5bf4183539c97c7bab825cb9324b8c570c0", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/136", "iss_label": "", "title": "error generating code ", "body": "Error generating code. Check the Developer Console AND the backend logs for details. Feel free to open a Github issue.\r\n\r\nwhile hitting the url and pasting the screenshot it shows the below error, am I doing it correctly?\r\n<img width=\"940\" alt=\"Screenshot 2023-11-30 212304\" src=\"https://github.com/abi/screenshot-to-code/assets/152517537/38d9b1af-125b-45d4-9c4a-cbb600f5ec7d\">\r\n<img width=\"940\" alt=\"Screenshot 2023-11-30 212304\" src=\"https://github.com/abi/screenshot-to-code/assets/152517537/9c5bf85b-8109-44f7-842d-ec69dd2c49d0\">\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '226af5bf4183539c97c7bab825cb9324b8c570c0', 'files': [{'path': 'Troubleshooting.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": ["Troubleshooting.md"], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/452", "iss_label": "", "title": "build failed", "body": "**Describe the bug**\r\nDocker container Exited for `screenshot-to-code-main-frontend-1`\r\n\r\n**To Reproduce**\r\nOS: Ubuntu 22.04.4 LTS\r\nDocker Compose version v2.28.1\r\nBuild version: (commit id) b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1\r\n\r\n\r\n**Screenshots of backend AND frontend terminal logs**\r\nNginx conf\r\n```\r\n location /screenshot {\r\n proxy_set_header Host $host;\r\n proxy_set_header X-Real-IP $remote_addr;\r\n proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;\r\n proxy_set_header X-Forwarded-Proto $scheme;\r\n proxy_send_timeout 1000;\r\n proxy_read_timeout 1000;\r\n send_timeout 1000;\r\n client_max_body_size 5M;\r\n proxy_pass http://127.0.0.1:5173;\r\n }\r\n```\r\n```\r\n~ docker logs --tail 444 screenshot-to-code-main-frontend-1\r\nyarn run v1.22.22\r\n$ vite --host 0.0.0.0\r\n\r\n VITE v4.5.0 ready in 1390 ms\r\n\r\n \u279c Local: http://localhost:5173/\r\n \u279c Network: http://172.20.0.3:5173/\r\n\r\n ERROR \r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n\r\n\r\n WARN Browserslist: caniuse-lite is outdated. 
Please run:\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\nfile:///app/tailwind.config.js:2\r\nmodule.exports = {\r\n^\r\n\r\nReferenceError: module is not defined\r\n at file:///app/tailwind.config.js:2:1\r\n at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)\r\n at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)\r\n at loadESMFromCJS (node:internal/modules/cjs/loader:1414:24)\r\n at Module._compile (node:internal/modules/cjs/loader:1547:5)\r\n at Object..js (node:internal/modules/cjs/loader:1677:16)\r\n at Module.load (node:internal/modules/cjs/loader:1318:32)\r\n at Function._load (node:internal/modules/cjs/loader:1128:12)\r\n at TracingChannel.traceSync (node:diagnostics_channel:322:14)\r\n at wrapModuleLoad (node:internal/modules/cjs/loader:219:24)\r\n at Module.require (node:internal/modules/cjs/loader:1340:12)\r\n at require (node:internal/modules/helpers:138:16)\r\n at /app/node_modules/tailwindcss/lib/lib/load-config.js:35:27\r\n at loadConfig (/app/node_modules/tailwindcss/lib/lib/load-config.js:39:6)\r\n at getTailwindConfig (/app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)\r\n at /app/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92\r\n at /app/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11\r\n at plugins (/app/node_modules/tailwindcss/lib/plugin.js:38:69)\r\n at LazyResult.runOnRoot (/app/node_modules/postcss/lib/lazy-result.js:329:16)\r\n at LazyResult.runAsync (/app/node_modules/postcss/lib/lazy-result.js:258:26)\r\n at LazyResult.async (/app/node_modules/postcss/lib/lazy-result.js:160:30)\r\n at LazyResult.then (/app/node_modules/postcss/lib/lazy-result.js:404:17)\r\n\r\nNode.js v22.12.0\r\nerror Command failed with exit code 1.\r\ninfo Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\r\n```\r\n![image](https://github.com/user-attachments/assets/498ddae4-247e-4641-811b-28b197c7aeef)\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/tailwind.config.js"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "214163b0e02176333b5543740cf6262e5da99602", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/268", "iss_label": "", "title": "model evaluation method", "body": "How to evaluate the performance of the model on generalized data, such as comparing the original screenshots with the generated results? 
Are there any indicators?", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '214163b0e02176333b5543740cf6262e5da99602', 'files': [{'path': 'blog/evaluating-claude.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["blog/evaluating-claude.md"], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/443", "iss_label": "", "title": "ReferenceError: module is not defined", "body": "When running the frontend yarn dev command, I get the error below.\r\n\r\n\r\nSteps to reproduce the behavior:\r\n1. Go to frontend folder\r\n2. execute: `yarn`\r\n3. execute: `yarn dev`\r\n\r\n\r\nImmediately after executing the yarn dev command, I get a message that says:\r\n```\r\n ERROR 16:31:02\r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n```\r\n\r\nThen when I navigate to http://localhost:5173/, it crashes with the following output:\r\n\r\n```\r\n(base) user@192 frontend % yarn dev \r\nyarn run v1.22.22\r\nwarning ../../../package.json: No license field\r\n$ vite\r\n 16:31:00\r\n VITE v4.5.0 ready in 544 ms\r\n\r\n \u279c Local: http://localhost:5173/ 16:31:00\r\n \u279c Network: use --host to expose 16:31:00\r\n \u279c press h to show help 16:31:00\r\n\r\n ERROR 16:31:02\r\n[TypeScript] Found 0 errors. Watching for file changes.\r\n\r\n\r\n WARN Browserslist: caniuse-lite is outdated. Please run: 16:31:37\r\n npx update-browserslist-db@latest\r\n Why you should do it regularly: https://github.com/browserslist/update-db#readme\r\n\r\n\r\n ERROR (node:91140) ExperimentalWarning: CommonJS module /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js is loading ES Module /Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js using require().\r\nSupport for loading ES Module in require() is an experimental feature and might change at any time\r\n(Use `node --trace-warnings ...` to show where the warning was created)\r\n\r\nfile:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2\r\nmodule.exports = {\r\n^\r\n\r\nReferenceError: module is not defined\r\n at file:///Users/user/Desktop/screenshot-to-code/frontend/tailwind.config.js:2:1\r\n at ModuleJobSync.runSync (node:internal/modules/esm/module_job:395:35)\r\n at ModuleLoader.importSyncForRequire (node:internal/modules/esm/loader:329:47)\r\n at loadESMFromCJS (node:internal/modules/cjs/loader:1376:24)\r\n at Module._compile (node:internal/modules/cjs/loader:1528:5)\r\n at Object..js (node:internal/modules/cjs/loader:1698:10)\r\n at Module.load (node:internal/modules/cjs/loader:1303:32)\r\n at Function._load (node:internal/modules/cjs/loader:1117:12)\r\n at TracingChannel.traceSync (node:diagnostics_channel:322:14)\r\n at wrapModuleLoad (node:internal/modules/cjs/loader:218:24)\r\n at Module.require (node:internal/modules/cjs/loader:1325:12)\r\n at require (node:internal/modules/helpers:136:16)\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:35:27\r\n at loadConfig (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/load-config.js:39:6)\r\n at getTailwindConfig 
(/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:71:116)\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/lib/setupTrackingContext.js:100:92\r\n at /Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/processTailwindFeatures.js:48:11\r\n at plugins (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/tailwindcss/lib/plugin.js:38:69)\r\n at LazyResult.runOnRoot (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:329:16)\r\n at LazyResult.runAsync (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:258:26)\r\n at LazyResult.async (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:160:30)\r\n at LazyResult.then (/Users/user/Desktop/screenshot-to-code/frontend/node_modules/postcss/lib/lazy-result.js:404:17)\r\n\r\nNode.js v23.3.0\r\nerror Command failed with exit code 1.\r\ninfo Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.\r\n\r\n```\r\n\r\nEdit: I am running MacOS 15.1 M2 chip.\r\nEdit 2: I only set OpenAI key, I do not intend to use both APIs.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b9076120dc7f610ae4c9d0fdb2b3fbea39f371f1', 'files': [{'path': 'frontend/tailwind.config.js', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["frontend/tailwind.config.js"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/132", "iss_label": "", "title": "Why Connection closed 1006", "body": "![image](https://github.com/abi/screenshot-to-code/assets/19514719/e8d6aa4c-e133-475d-bce6-7309082c0cc2)\r\n\r\n![image](https://github.com/abi/screenshot-to-code/assets/19514719/9e00d1ef-67e2-4e13-9276-4ea4119c12cc)\r\n\r\n![image](https://github.com/abi/screenshot-to-code/assets/19514719/a15e37ce-d0aa-4dfe-896d-3eb0a96a7e63)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1f08d71d4dbc614b6b2eaaddb6f8d5858ca6aa5b', 'files': [{'path': 'backend/main.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["backend/main.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "689783eabd552151fa511e44cba90c14f3ee4dcd", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/83", "iss_label": "", "title": "code error", "body": "Hi, I tried the [online version](https://picoapps.xyz/free-tools/screenshot-to-code) of your tool with my API key but I got an error, shown in the following screenshot\r\n\r\n![Web capture_22-11-2023_22822_www maras-it com](https://github.com/abi/screenshot-to-code/assets/482210/3c331d2e-cd22-4d65-8d4d-003468cd0c2e)\r\n\r\nwhich returns this in the console:\r\n\r\n```JS\r\nWebSocket error code CloseEvent\u00a0{isTrusted: true, wasClean: false, code: 1006, reason: '', type: 'close',\u00a0\u2026}isTrusted: truebubbles: falsecancelBubble: falsecancelable: falsecode: 
1006composed: falsecurrentTarget: WebSocket\u00a0{url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null,\u00a0\u2026}defaultPrevented: falseeventPhase: 0reason: \"\"returnValue: truesrcElement: WebSocket\u00a0{url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null,\u00a0\u2026}target: WebSocket\u00a0{url: 'wss://backend-screenshot-to-code.onrender.com/generate-code', readyState: 3, bufferedAmount: 0, onopen: null, onerror: null,\u00a0\u2026}timeStamp: 70399.80000001192type: \"close\"wasClean: false[[Prototype]]: CloseEventcode: (...)reason: (...)wasClean: (...)constructor: \u0192 CloseEvent()Symbol(Symbol.toStringTag): \"CloseEvent\"bubbles: (...)cancelBubble: (...)cancelable: (...)composed: (...)currentTarget: (...)defaultPrevented: (...)eventPhase: (...)returnValue: (...)srcElement: (...)target: (...)timeStamp: (...)type: (...)get code: \u0192 code()get reason: \u0192 reason()get wasClean: \u0192 wasClean()[[Prototype]]: Event\r\n(anonymous) @ index-9af3e78e.js:225\r\n```\r\n\r\n<img width=\"946\" alt=\"image\" src=\"https://github.com/abi/screenshot-to-code/assets/482210/b8403fbe-fc6b-479d-92ea-5f70610b3d6c\">\r\n\r\nany idea on that topic?\r\n\r\ndavid\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '689783eabd552151fa511e44cba90c14f3ee4dcd', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "7d6fde2deafa014dc1a90c3b1dcb2ed88680a2ff", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/1", "iss_label": "", "title": "Error: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte", "body": "Hello, thank you for your contribution, I am having the above problem, can you help me?\r\n\r\n` File \"<frozen codecs>\", line 322, in decode\r\nUnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte`", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [".env"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "1", "info_type": "Other\nEnvironment variable"}, "loctype": {"code": [], "doc": [], "test": [], "config": [".env"], "asset": []}} +{"organization": "abi", "repo_name": "screenshot-to-code", "base_commit": "fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c", "iss_html_url": "https://github.com/abi/screenshot-to-code/issues/150", "iss_label": "", "title": "Error generating code. Check the Developer Console AND the backend logs for details", "body": "My ChatGPT has access to GPT-VISION, and the web app loads well, but when I upload an image it returns this error: 'Error generating code. 
Check the Developer Console AND the backend logs for details'\r\n<img width=\"466\" alt=\"error\" src=\"https://github.com/abi/screenshot-to-code/assets/100529823/97c337b7-de54-45f9-8def-f984ade50a6d\">\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fcd305d0d26e7ef7b93dd605cbd5ed0e1a5a5e9c', 'files': [{'path': 'docker-compose.yml', 'Loc': {'(None, None, 20)': {'mod': [20]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": ["docker-compose.yml"], "test": [], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "4622b3395276b37e10141fab43ffea33941ca0c2", "iss_html_url": "https://github.com/pytorch/pytorch/issues/2384", "iss_label": "", "title": "How the grad is transferred between layer", "body": "consider a simple example here:\r\n```python\r\nimport torch\r\nfrom torch.autograd import Variable\r\n\r\ninput = Variable(torch.randn(20, 3, 28, 28), requires_grad=True)\r\nm = torch.nn.Conv2d(3, 16, 5)\r\noutput = m(input)\r\n\r\nloss = torch.sum(output)# define loss to perform backprop\r\nm.zero_grad()\r\nloss.backward()\r\n\r\nprint(type(input))\r\nprint(input.grad.size())\r\nprint(type(output))\r\nprint(output.grad)\r\n```\r\nthe output is:\r\n```\r\n<class 'torch.autograd.variable.Variable'>\r\ntorch.Size([20, 3, 28, 28])\r\n<class 'torch.autograd.variable.Variable'>\r\nNone\r\n```\r\nI find the `output.grad` is `None`. I don't know how the `input.grad` is calculated without `output.grad`.\r\nand want to know how to get the values of `output.grad`.\r\n\r\nthanks!", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '4622b3395276b37e10141fab43ffea33941ca0c2', 'files': [{'path': 'torch/autograd/variable.py', 'Loc': {\"('Variable', 'retain_grad', 236)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "3", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["torch/autograd/variable.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "2abcafcfd8beb4f6a22e08532d58f9f09c490f0f", "iss_html_url": "https://github.com/pytorch/pytorch/issues/96983", "iss_label": "module: binaries\ntriaged\nmodule: arm", "title": "PyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support", "body": "### \ud83d\udc1b Describe the bug\r\n\r\nPyTorch 2.0 aarch64 wheels are missing the mkldnn+acl backend support, where as PyTorch 1.13.0 had support.\r\n\r\nSolution:\r\nthe wheels need to be built with the `--enable-mkldnn` option while building them from the pytorch/builder repo.\r\n\r\nexample command for pytorch wheel builder script:\r\n`./build_aarch64_wheel.py --python-version 3.8 --use-docker --keep-running --os ubuntu20_04 --enable-mkldnn --branch release/2.0`\r\n\r\nTo reproduce the issue, create c6g or c7g instance from AWS EC2, and in the below output, look for `USE_MKLDNN=`, this was ON for PyTorch 1.13.0 but OFF for PyTorch2.0.0.\r\n\r\nnon-working scenario\r\n```\r\npip install torch==2.0.0\r\n\r\ntime python3 -c \"import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); 
[torch.bmm(a,b).sum() for i in range(1000)]\"\r\n2.0.0 PyTorch built with:\r\n - GCC 10.2\r\n - C++ Version: 201703\r\n - OpenMP 201511 (a.k.a. OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: NO AVX\r\n - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n\r\n```\r\n\r\nworking scenario:\r\n\r\n```\r\npip3 install torch==1.13.0\r\n\r\ntime python3 -c \"import torch; torch.set_num_threads(8); print(torch.__version__, torch.__config__.show(), torch.get_num_threads());a=torch.rand(100, 100, 100); b=torch.rand(100,100, 100); [torch.bmm(a,b).sum() for i in range(1000)]\"\r\n\r\n1.13.0 PyTorch built with:\r\n - GCC 10.2\r\n - C++ Version: 201402\r\n - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)\r\n - OpenMP 201511 (a.k.a. 
OpenMP 4.5)\r\n - LAPACK is enabled (usually provided by MKL)\r\n - NNPACK is enabled\r\n - CPU capability usage: NO AVX\r\n - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-10/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=open, TORCH_VERSION=1.13.0, USE_CUDA=OFF, USE_CUDNN=OFF, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, \r\n \r\n\r\n\r\n```\r\n\r\n### Versions\r\n```\r\nCollecting environment information...\r\nPyTorch version: 2.0.0\r\nIs debug build: False\r\nCUDA used to build PyTorch: None\r\nROCM used to build PyTorch: N/A\r\n\r\nOS: Ubuntu 20.04.5 LTS (aarch64)\r\nGCC version: (Ubuntu 10.3.0-1ubuntu1~20.04) 10.3.0\r\nClang version: Could not collect\r\nCMake version: version 3.25.2\r\nLibc version: glibc-2.31\r\n\r\nPython version: 3.8.10 (default, Nov 14 2022, 12:59:47) [GCC 9.4.0] (64-bit runtime)\r\nPython platform: Linux-5.15.0-1028-aws-aarch64-with-glibc2.29\r\nIs CUDA available: False\r\nCUDA runtime version: No CUDA\r\nCUDA_MODULE_LOADING set to: N/A\r\nGPU models and configuration: No CUDA\r\nNvidia driver version: No CUDA\r\ncuDNN version: No CUDA\r\nHIP runtime version: N/A\r\nMIOpen runtime version: N/A\r\nIs XNNPACK available: True\r\n\r\nCPU:\r\nArchitecture: aarch64\r\nCPU op-mode(s): 32-bit, 64-bit\r\nByte Order: Little Endian\r\nCPU(s): 16\r\nOn-line CPU(s) list: 0-15\r\nThread(s) per core: 1\r\nCore(s) per socket: 16\r\nSocket(s): 1\r\nNUMA node(s): 1\r\nVendor ID: ARM\r\nModel: 1\r\nStepping: r1p1\r\nBogoMIPS: 2100.00\r\nL1d cache: 1 MiB\r\nL1i cache: 1 MiB\r\nL2 cache: 16 MiB\r\nL3 cache: 32 MiB\r\nNUMA node0 CPU(s): 0-15\r\nVulnerability Itlb multihit: Not affected\r\nVulnerability L1tf: Not affected\r\nVulnerability Mds: Not affected\r\nVulnerability Meltdown: Not affected\r\nVulnerability Mmio stale data: Not affected\r\nVulnerability Retbleed: Not affected\r\nVulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl\r\nVulnerability Spectre v1: Mitigation; __user pointer sanitization\r\nVulnerability Spectre v2: Mitigation; CSV2, BHB\r\nVulnerability Srbds: Not affected\r\nVulnerability Tsx async abort: Not affected\r\nFlags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 sm3 sm4 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs paca pacg dcpodp svei8mm svebf16 i8mm bf16 dgh rng\r\n\r\nVersions of relevant libraries:\r\n[pip3] numpy==1.24.2\r\n[pip3] torch==2.0.0\r\n[pip3] torchvision==0.14.1\r\n[conda] Could not collect\r\n```\r\n\r\ncc @ezyang @seemethere 
@malfet", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2abcafcfd8beb4f6a22e08532d58f9f09c490f0f', 'files': [{'path': '.ci/aarch64_linux/build_aarch64_wheel.py', 'Loc': {'(None, None, None)': {'mod': [8]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "2", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [".ci/aarch64_linux/build_aarch64_wheel.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pytorch", "repo_name": "pytorch", "base_commit": "2dff0b3e918530719f7667cb31541f036a25e3f2", "iss_html_url": "https://github.com/pytorch/pytorch/issues/48435", "iss_label": "", "title": "AttributeError: module 'torch.cuda' has no attribute 'comm'", "body": "## \u2753 Questions and Help\r\n\r\nI'm using torch 1.7.0, and get this kind of error\r\n\r\nmy torch is installed via \r\n\r\npip install torch==1.7.0+cu101 torchvision==0.8.1+cu101 torchaudio===0.7.0 -f https://download.pytorch.org/whl/torch_stable.html\r\n\r\nmy os is win10", "code": null, "pr_html_url": null, "commit_html_url": "https://github.com/facebookresearch/InterHand2.6M/commit/874eb9f740ef54c275433d1bd27f8fb8f6a8f17d", "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "facebookresearch", "pro": "InterHand2.6M", "path": ["{'base_commit': '874eb9f740ef54c275433d1bd27f8fb8f6a8f17d', 'files': [{'path': 'common/nets/module.py', 'status': 'modified', 'Loc': {\"('PoseNet', 'soft_argmax_1d', 41)\": {'mod': [43]}}}]}"]}], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "commit", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": ["common/nets/module.py"], "doc": [], "test": [], "config": [], "asset": ["InterHand2.6M"]}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "e8f6013d0349229fd8f7d298952cfe56fc4b8761", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2070", "iss_label": "bug\nstale", "title": "Liaobots and You don't work", "body": "Liaobots and You do not work, they give the following errors:\r\n\r\n```\r\nLiaobots: ResponseStatusError: Response 500: Error\r\n``` \r\n\r\n```\r\nYou: ResponseStatusError: Response 401: {\"status_code\":401,\"request_id\":\"request-id-live-183191e7-adc1-4838-8e29-6e0c5c3ca048\",\"error_type\":\"endpoint_not_authorized_for_sdk\",\"error_message\":\"The project owner has not authorized the SDK to call this endpoint. 
Please enable it in the dashboard to continue: https://stytch.com/dashboard/sdk-configuration.\",\"error_url\":\"https://stytch.com/docs/api/errors/401#endpoint_not_authorized_for_sdk\"}\r\n``` \r\n@xtekky @hlohaus ", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'e8f6013d0349229fd8f7d298952cfe56fc4b8761', 'files': [{'path': 'g4f/Provider/Liaobots.py', 'Loc': {\"('Liaobots', 'create_async_generator', 111)\": {'mod': [149]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/Liaobots.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "fa2d608822540c9b73350bfa036e8822ade4e23f", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2305", "iss_label": "stale", "title": "ValueError: Unknown model: dall-e-3", "body": "```\r\nC:\\Users\\MAX\\Desktop>pip install -U g4f[all]\r\nDefaulting to user installation because normal site-packages is not writeable\r\nRequirement already satisfied: g4f[all] in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (0.3.3.2)\r\nRequirement already satisfied: requests in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2.32.3)\r\nRequirement already satisfied: aiohttp in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.9.3)\r\nRequirement already satisfied: brotli in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.1.0)\r\nRequirement already satisfied: pycryptodome in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (3.20.0)\r\nRequirement already satisfied: curl-cffi>=0.6.2 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.7.3)\r\nRequirement already satisfied: cloudscraper in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.2.71)\r\nRequirement already satisfied: certifi in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (2024.8.30)\r\nRequirement already satisfied: browser-cookie3 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (0.19.1)\r\nRequirement already satisfied: PyExecJS in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages (from g4f[all]) (1.5.1)\r\nRequirement already satisfied: duckduckgo-search>=5.0 in c:\\users\\max\\appdata\\local\\packages\\pythonsoftwarefoundation.python.3.12_qbz5n2kfra8p0\\localcache\\local-packages\\python312\\site-packages 
(from g4f[all]) (6.3.2)\r\n[~50 further pip "Requirement already satisfied: <package> in c:\\users\\max\\...\\site-packages (from ...->g4f[all]) (<version>)" lines elided: every g4f[all] dependency (beautifulsoup4, pywebview, platformdirs, plyer, cryptography, aiohttp-socks, pillow, cairosvg, werkzeug, flask, loguru, fastapi, uvicorn, nest-asyncio) and their transitive dependencies were already installed]\r\n\r\nC:\\Users\\MAX\\Desktop>\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\MAX\\Desktop\\gptimg.py\", line 4, in <module>\r\n response = client.images.generate(\r\n ^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\MAX\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\g4f\\client\\client.py\", line 421, in generate\r\n return asyncio.run(self.async_generate(prompt, model, 
response_format=response_format, **kwargs))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\runners.py\", line 194, in run\r\n return runner.run(main)\r\n ^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\runners.py\", line 118, in run\r\n return self._loop.run_until_complete(task)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"C:\\Program Files\\WindowsApps\\PythonSoftwareFoundation.Python.3.12_3.12.2032.0_x64__qbz5n2kfra8p0\\Lib\\asyncio\\base_events.py\", line 687, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"C:\\Users\\MAX\\AppData\\Local\\Packages\\PythonSoftwareFoundation.Python.3.12_qbz5n2kfra8p0\\LocalCache\\local-packages\\Python312\\site-packages\\g4f\\client\\client.py\", line 426, in async_generate\r\n raise ValueError(f\"Unknown model: {model}\")\r\nValueError: Unknown model: dall-e-3\r\n```", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'fa2d608822540c9b73350bfa036e8822ade4e23f', 'files': [{'path': 'g4f/models.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/models.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "1ade1d959cbc9aea7cf653bbe5b6c414ba486c97", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1292", "iss_label": "bug\nstale", "title": "RecursionError: maximum recursion depth exceeded while calling a Python object", "body": "Ubuntu 22, g4f-0.1.9.0, pip installation method, python3.10\r\n\r\n**Bug description**\r\nG4F API has these errors after 5-10 requests. I have to restart constantly. It is very uncomfortable. 
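The traceback above ends in `ValueError: Unknown model: dall-e-3`, raised from `g4f/client/client.py` (the record's `file_loc` accordingly points at `g4f/models.py`). A minimal sketch, assuming only the `client.images.generate(prompt, model, ...)` call visible in that traceback, of probing which image-model names a given g4f build accepts; the candidate names are illustrative guesses, not a documented list.

```python
# Hedged sketch: probe which image-model names this g4f build accepts.
# Assumes only the Client API visible in the traceback above; the
# candidate model names are illustrative guesses, not a documented list.
from g4f.client import Client

client = Client()
for model in ("dall-e-3", "sdxl", "playground-v2.5"):  # hypothetical candidates
    try:
        client.images.generate(model=model, prompt="a lighthouse at dusk")
    except ValueError as exc:  # "Unknown model: ..." from client.py
        print(f"{model}: rejected ({exc})")
    else:
        print(f"{model}: accepted")
        break
```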
This problem did not exist in the previous version.\r\n\r\n**Errors**\r\n```\r\nRecursionError: maximum recursion depth exceeded in comparison\r\nRecursionError: maximum recursion depth exceeded while calling a Python object\r\nRuntimeError: RetryProvider failed:\r\nYou: RecursionError: maximum recursion depth exceeded\r\nChatgpt4Online: RecursionError: maximum recursion depth exceeded in comparison\r\nChatAnywhere: RecursionError: maximum recursion depth exceeded while encoding a JSON object\r\nChatgptX: RecursionError: maximum recursion depth exceeded in comparison\r\nGptForLove: RuntimeUnavailableError: Could not find an available JavaScript runtime.\r\nChatBase: RecursionError: maximum recursion depth exceeded while encoding a JSON object\r\nGptGo: RecursionError: maximum recursion depth exceeded while calling a Python object\r\n```\r\n\r\n**Traceback**\r\n```\r\nERROR: Exception in ASGI application\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py\", line 85, in chat_completions\r\n response = g4f.ChatCompletion.create(\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/__init__.py\", line 76, in create\r\n return result if stream else ''.join(result)\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py\", line 59, in create_completion\r\n self.raise_exceptions()\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/Provider/retry_provider.py\", line 87, in raise_exceptions\r\n raise RuntimeError(\"\\n\".join([\"RetryProvider failed:\"] + [\r\nRuntimeError: RetryProvider failed:\r\nChatAnywhere: RecursionError: maximum recursion depth exceeded\r\nChatBase: RecursionError: maximum recursion depth exceeded\r\nChatgptX: RecursionError: maximum recursion depth exceeded\r\nYou: RecursionError: maximum recursion depth exceeded while calling a Python object\r\nGptGo: RecursionError: maximum recursion depth exceeded\r\nChatgpt4Online: RecursionError: maximum recursion depth exceeded\r\nGptForLove: RecursionError: maximum recursion depth exceeded\r\n\r\nDuring handling of the above exception, another exception occurred:\r\n\r\nTraceback (most recent call last):\r\n File \"/usr/local/lib/python3.10/dist-packages/uvicorn/protocols/http/h11_impl.py\", line 408, in run_asgi\r\n result = await app( # type: ignore[func-returns-value]\r\n File \"/usr/local/lib/python3.10/dist-packages/uvicorn/middleware/proxy_headers.py\", line 84, in __call__\r\n return await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/applications.py\", line 1106, in __call__\r\n await super().__call__(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/applications.py\", line 122, in __call__\r\n await self.middleware_stack(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 184, in __call__\r\n raise exc\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/errors.py\", line 162, in __call__\r\n await self.app(scope, receive, _send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 79, in __call__\r\n raise exc\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/middleware/exceptions.py\", line 68, in __call__\r\n await self.app(scope, receive, sender)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py\", line 20, in __call__\r\n raise e\r\n File 
\"/usr/local/lib/python3.10/dist-packages/fastapi/middleware/asyncexitstack.py\", line 17, in __call__\r\n await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 718, in __call__\r\n await route.handle(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 276, in handle\r\n await self.app(scope, receive, send)\r\n File \"/usr/local/lib/python3.10/dist-packages/starlette/routing.py\", line 66, in app\r\n response = await func(request)\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 274, in app\r\n raw_response = await run_endpoint_function(\r\n File \"/usr/local/lib/python3.10/dist-packages/fastapi/routing.py\", line 191, in run_endpoint_function\r\n return await dependant.call(**values)\r\n File \"/usr/local/lib/python3.10/dist-packages/g4f/api/__init__.py\", line 91, in chat_completions\r\n logging.exception(e)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 2113, in exception\r\n error(msg, *args, exc_info=exc_info, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 2105, in error\r\n root.error(msg, *args, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1506, in error\r\n self._log(ERROR, msg, args, **kwargs)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1624, in _log\r\n self.handle(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1634, in handle\r\n self.callHandlers(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1696, in callHandlers\r\n hdlr.handle(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 968, in handle\r\n self.emit(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 1100, in emit\r\n msg = self.format(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 943, in format\r\n return fmt.format(record)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 686, in format\r\n record.exc_text = self.formatException(record.exc_info)\r\n File \"/usr/lib/python3.10/logging/__init__.py\", line 636, in formatException\r\n traceback.print_exception(ei[0], ei[1], tb, None, sio)\r\n File \"/usr/lib/python3.10/traceback.py\", line 120, in print_exception\r\n for line in te.format(chain=chain):\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 248, in format\r\n yield from _ctx.emit(exc.format_exception_only())\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 64, in emit\r\n for text in text_gen:\r\n File \"/usr/local/lib/python3.10/dist-packages/exceptiongroup/_formatting.py\", line 335, in format_exception_only\r\n if isinstance(self.__notes__, collections.abc.Sequence):\r\n File \"/usr/lib/python3.10/abc.py\", line 119, in __instancecheck__\r\n return _abc_instancecheck(cls, instance)\r\nRecursionError: maximum recursion depth exceeded in comparison\r\n```\r\n\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '1ade1d959cbc9aea7cf653bbe5b6c414ba486c97', 'files': [{'path': 'g4f/cli.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/cli.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c159eebd494b1aef06340429b7b62cdfb84f783d", 
"iss_html_url": "https://github.com/xtekky/gpt4free/issues/2556", "iss_label": "bug", "title": "Errors when generating images in the following models:", "body": "Hi!\r\nerrors when generating images in the following models:\r\nResponse 404: The page could not be found\r\nsdxl, playground-v2.5, sd-3\r\n\r\n dall-e-3: Missing \"_U\" cookie\r\n \r\n midjourney: Cannot connect to host image.pollinations.ai:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')]", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c159eebd494b1aef06340429b7b62cdfb84f783d', 'files': [{'path': 'projects/windows/main.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["projects/windows/main.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "b7eee50930dbd782d7c068d1d29cd270b97bc741", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/1710", "iss_label": "bug\nstale", "title": "AttributeError: module 'g4f' has no attribute 'client'", "body": "**Bug description** \r\nWhen trying to run script from Quickstart, i get this error.\r\n\r\nTraceback (most recent call last):\r\n File \"C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py\", line 3, in <module>\r\n engine = g4f.client.Client()\r\nAttributeError: module 'g4f' has no attribute 'client'\r\n\r\n**Environment**\r\nPython version: 3.11.7", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'b7eee50930dbd782d7c068d1d29cd270b97bc741', 'files': [{'path': 'g4f/client/__init__.py', 'Loc': {}}, {'path': 'C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py'}]}", "own_code_loc": [{"path": "C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": ["g4f/client/__init__.py"], "doc": [], "test": ["C:/Users/samso/AppData/Local/Programs/Python/Python311/test.py"], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "2a54c36043b9d87b96c4b7699ce194f8523479b8", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/552", "iss_label": "bug", "title": "Unable to fetch the response, Please try again.", "body": "![IMG_20230514_171809.jpg](https://github.com/xtekky/gpt4free/assets/29172927/6263b9db-3362-4c5b-b043-80b62213a61b)\n\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '2a54c36043b9d87b96c4b7699ce194f8523479b8', 'files': [{'path': 'gpt4free/you/__init__.py', 'Loc': {\"('Completion', 'create', 22)\": {'mod': [41]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["gpt4free/you/__init__.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "xtekky", "repo_name": "gpt4free", "base_commit": "c29487cdb522a2655ccff45bdfc33895ed4daf84", "iss_html_url": "https://github.com/xtekky/gpt4free/issues/2078", "iss_label": "bug", "title": "HuggingChat provider is not working - ResponseStatusError: 
Response 500", "body": "### Bug description\r\n\r\nWhen I try to use the HuggingChat provider, having added a cookies/har file, I always get the same error: `An error occurred: HuggingChat: ResponseStatusError: Response 500:`\r\n\r\n```\r\nUsing HuggingChat provider and CohereForAI/c4ai-command-r-plus model\r\nINFO:werkzeug:192.168.80.1 - - [22/Jun/2024 16:31:48] \"POST /backend-api/v2/conversation HTTP/1.1\" 200 -\r\nERROR:root:Response 500: \r\nTraceback (most recent call last):\r\n File \"/app/g4f/gui/server/api.py\", line 177, in _create_response_stream\r\n for chunk in ChatCompletion.create(**kwargs):\r\n File \"/app/g4f/providers/base_provider.py\", line 223, in create_completion\r\n yield loop.run_until_complete(await_callback(gen.__anext__))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/usr/lib/python3.11/asyncio/base_events.py\", line 654, in run_until_complete\r\n return future.result()\r\n ^^^^^^^^^^^^^^^\r\n File \"/app/g4f/providers/base_provider.py\", line 52, in await_callback\r\n return await callback()\r\n ^^^^^^^^^^^^^^^^\r\n File \"/app/g4f/Provider/HuggingChat.py\", line 99, in create_async_generator\r\n await raise_for_status(response)\r\n File \"/app/g4f/requests/raise_for_status.py\", line 28, in raise_for_status_async\r\n raise ResponseStatusError(f\"Response {response.status}: {message}\")\r\ng4f.errors.ResponseStatusError: Response 500:\r\n```\r\n\r\n### Steps to reproduce\r\n\r\n1. Put your cookies json file / har file for `huggingface.co` in the `har_and_cookies` directory\r\n2. Run gpt4free in Docker using docker compose\r\n3. Open g4f web ui (using OpenAI compatible API (port `1337`) gives the same error, though)\r\n4. Select this provider: `HuggingChat (Auth)`\r\n5. Select any model, for example `CohereForAI/c4ai-command-r-plus`\r\n6. Send any message to the LLM\r\n7. 
See the error\r\n\r\n### Screenshot\r\n\r\n![image](https://github.com/xtekky/gpt4free/assets/35491968/7afaf19b-4af2-4703-8bf3-c4c02eb511fc)\r\n\r\n### Environment\r\n\r\n- gpt4free version 0.3.2.0 (this git repository, commit `e8f6013d`)\r\n- docker compose\r\n- Ubuntu 22.04.4 LTS x86_64\r\n\r\n-----\r\n\r\nduplicates https://github.com/xtekky/gpt4free/issues/2053 which is closed", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c29487cdb522a2655ccff45bdfc33895ed4daf84', 'files': [{'path': 'g4f/Provider/HuggingChat.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "1", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["g4f/Provider/HuggingChat.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "Z4nzu", "repo_name": "hackingtool", "base_commit": "c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e", "iss_html_url": "https://github.com/Z4nzu/hackingtool/issues/68", "iss_label": "question", "title": "default username and password of social fish", "body": "hay man the tool works fine but what is the default username and password of social fish", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'c81c08c1e9b847b9d1dcdc5b0a90d5de92d7b75e', 'files': [{'path': 'README.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["README.md"], "test": [], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "f7026b04f5e5909aa15848b25de2becd675871a9", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/2475", "iss_label": "", "title": "Multinomial Naive Bayes: Scikit and Weka have different results", "body": "Hi All,\nI used the sklearn.naive_bayes.MultinomialNB on a toy example.\nComparing the results with WEKA, I've noticed a quite different AUC.\nScikit (0.579) - Weka (0.664)\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'f7026b04f5e5909aa15848b25de2becd675871a9', 'files': [{'path': 'sklearn/cross_validation.py', 'Loc': {\"(None, 'cross_val_score', 1075)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["sklearn/cross_validation.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "scikit-learn", "repo_name": "scikit-learn", "base_commit": "0ab5c678bba02888b62b777b4c757e367b3458d5", "iss_html_url": "https://github.com/scikit-learn/scikit-learn/issues/8470", "iss_label": "", "title": "How to let gbdt = GradientBoostingRegressor(), gbdt.fit(X_feature, X_label) know whether the feature of input X is categorical or numerical?", "body": "", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '0ab5c678bba02888b62b777b4c757e367b3458d5', 'files': [{'path': 'sklearn/preprocessing/_encoders.py', 'Loc': {\"('OneHotEncoder', None, 151)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": 
["sklearn/preprocessing/_encoders.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "pandas-dev", "repo_name": "pandas", "base_commit": "184f2dba255f279697cb1d7567428b3e6403c2d0", "iss_html_url": "https://github.com/pandas-dev/pandas/issues/3209", "iss_label": "", "title": "BUG: read_csv: dtype={'id' : np.str}: Datatype not understood", "body": "I have a CSV with several columns. The first of which is a field called `id` with entries of the type `0001`, `0002`, etc. \n\nWhen loading this file, the following works:\n\n``` python\npd.read_csv(my_path, dtype={'id' : np.int})\n```\n\nbut the following doesn't:\n\n``` python\npd.read_csv(my_path, dtype={'id' : np.str})\n```\n\nnor does this either:\n\n``` python\npd.read_csv(my_path, dtype={'id' : str})\n```\n\nI get: `Datatype not understood`\n\nThis is with `pandas-0.10.1`\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [{"Loc": [12, 18], "path": null}], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3\nand\n2", "info_type": "Code"}, "loctype": {"code": [null], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "53011c3d7946dadb8274a4c5c7586ab54edf792d", "iss_html_url": "https://github.com/meta-llama/llama/issues/48", "iss_label": "", "title": "How to run 13B model on 4*16G V100\uff1f", "body": "RuntimeError: CUDA out of memory. Tried to allocate 160.00 MiB (GPU 0; 15.78 GiB total capacity; 14.26 GiB already allocated; 121.19 MiB free; 14.69 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 143) of binary: /opt/conda/envs/torch1.12/bin/python", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "fabawi", "pro": "wrapyfi"}, {"org": "modular-ml", "pro": "wrapyfi-examples_llama"}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["wrapyfi", "wrapyfi-examples_llama"]}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "7e1b864d574fe6f5ff75fa1d028feb269f7152d2", "iss_html_url": "https://github.com/meta-llama/llama/issues/836", "iss_label": "model-usage", "title": "Failed to run llama2-13B but it worked with llama2-7B", "body": "It worked with llama2-7b. 
But when I tried to run the **llama2-13b** model using this `torchrun --nproc_per_node 2 example_chat_completion.py --ckpt_dir /path/to/llama-2-13b-chat/ --tokenizer_path /path/to/tokenizer.model --max_seq_len 128 --max_batch_size 4`, it didn't work.\r\n\r\nError log in brief: `RuntimeError: CUDA error: invalid device ordinal`\r\n\r\n#### Full error log\r\n```log\r\nWARNING:torch.distributed.run:\r\n*****************************************\r\nSetting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.\r\n*****************************************\r\n> initializing model parallel with size 2\r\n> initializing ddp with size 1\r\n> initializing pipeline with size 1\r\nTraceback (most recent call last):\r\n File \"/home/alex/joy/ml/llama_playground/llama/example_chat_completion.py\", line 104, in <module>\r\n fire.Fire(main)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 141, in Fire\r\n component_trace = _Fire(component, args, parsed_flag_args, context, name)\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 475, in _Fire\r\n component, remaining_args = _CallAndUpdateTrace(\r\n ^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/fire/core.py\", line 691, in _CallAndUpdateTrace\r\n component = fn(*varargs, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/joy/ml/llama_playground/llama/example_chat_completion.py\", line 35, in main\r\n generator = Llama.build(\r\n ^^^^^^^^^^^^\r\n File \"/home/alex/joy/ml/llama_playground/llama/llama/generation.py\", line 92, in build\r\n torch.cuda.set_device(local_rank)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/cuda/__init__.py\", line 350, in set_device\r\n torch._C._cuda_setDevice(device)\r\nRuntimeError: CUDA error: invalid device ordinal\r\nCUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.\r\nFor debugging consider passing CUDA_LAUNCH_BLOCKING=1.\r\nCompile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.\r\n\r\nWARNING:torch.distributed.elastic.multiprocessing.api:Sending process 41031 closing signal SIGTERM\r\nERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 41032) of binary: /home/alex/miniconda3/envs/llama/bin/python\r\nTraceback (most recent call last):\r\n File \"/home/alex/miniconda3/envs/llama/bin/torchrun\", line 33, in <module>\r\n sys.exit(load_entry_point('torch==2.0.1', 'console_scripts', 'torchrun')())\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py\", line 346, in wrapper\r\n return f(*args, **kwargs)\r\n ^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/run.py\", line 794, in main\r\n run(args)\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/run.py\", line 785, in run\r\n elastic_launch(\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 134, in __call__\r\n return launch_agent(self._config, self._entrypoint, 
list(args))\r\n ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\r\n File \"/home/alex/miniconda3/envs/llama/lib/python3.11/site-packages/torch/distributed/launcher/api.py\", line 250, in launch_agent\r\n raise ChildFailedError(\r\ntorch.distributed.elastic.multiprocessing.errors.ChildFailedError:\r\n============================================================\r\nexample_chat_completion.py FAILED\r\n------------------------------------------------------------\r\nFailures:\r\n <NO_OTHER_FAILURES>\r\n------------------------------------------------------------\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-10-02_12:32:27\r\n host : alex-workstation\r\n rank : 1 (local_rank: 1)\r\n exitcode : 1 (pid: 41032)\r\n error_file: <N/A>\r\n traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html\r\n============================================================\r\n(\r\n```\r\n\r\n\r\n#### System Specs\r\ni9 9900K + 16G DDR4 (with 16GB swap) + 2080ti (modded version with 22GB VRAM, the card runs smoothly on Windows and Linux)\r\nOS:\r\nUbuntu 22.04 x86_64\r\nEnvironments:\r\nFrom miniconda\r\n```conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia```\r\n\r\n#### My attempt NO.1 \r\nI set the only GPU RTX 2080ti in the terminal. `export CUDA_VISIBLE_DEVICES=0` **0** is the ID of my RTX 2080ti.\r\nI looked up the GPU id by ```sudo lshw -C display```\r\n\r\nResult.\r\n```log\r\n *-display \r\n description: VGA compatible controller\r\n product: TU102 [GeForce RTX 2080 Ti Rev. A]\r\n vendor: NVIDIA Corporation\r\n physical id: 0\r\n bus info: pci@0000:01:00.0\r\n version: a1\r\n width: 64 bits\r\n clock: 33MHz\r\n capabilities: pm msi pciexpress vga_controller bus_master cap_list rom\r\n configuration: driver=nvidia latency=0\r\n resources: iomemory:2f0-2ef iomemory:2f0-2ef irq:186 memory:de000000-deffffff memory:2fe0000000-2fefffffff memory:2ff0000000-2ff1ffffff ioport:e000(size=128) memory:c0000-dffff\r\n *-display\r\n description: Display controller\r\n product: CoffeeLake-S GT2 [UHD Graphics 630]\r\n vendor: Intel Corporation\r\n physical id: 2\r\n bus info: pci@0000:00:02.0\r\n version: 02\r\n width: 64 bits\r\n clock: 33MHz\r\n capabilities: pciexpress msi pm bus_master cap_list\r\n configuration: driver=i915 latency=0\r\n resources: iomemory:2f0-2ef iomemory:2f0-2ef irq:185 memory:2ffe000000-2ffeffffff memory:2fd0000000-2fdfffffff ioport:f000(size=64)\r\n *-graphics\r\n product: EFI VGA\r\n physical id: 2\r\n logical name: /dev/fb0\r\n capabilities: fb\r\n configuration: depth=32 resolution=2560,1080\r\n```\r\nBut it's still the same error. FYI, when starting to run llama2-13B, the ram usage hadn't even reached 16GB yet.\r\n\r\nWith some testing codes using pytorch\r\n```python\r\nimport torch\r\ndevice_count = torch.cuda.device_count()\r\nprint(f\"Number of available devices: {device_count}\")\r\n\r\nfor i in range(device_count):\r\n print(f\"Device {i}: {torch.cuda.get_device_name(i)}\")\r\n```\r\noutput: \r\n**Number of available devices: 1\r\nDevice 0: NVIDIA GeForce RTX 2080 Ti**\r\n\r\nNvidia SMI info\r\n```log\r\n+---------------------------------------------------------------------------------------+\r\n| NVIDIA-SMI 535.113.01 Driver Version: 535.113.01 CUDA Version: 12.2 |\r\n|-----------------------------------------+----------------------+----------------------+\r\n| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |\r\n| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. 
|\r\n| | | MIG M. |\r\n|=========================================+======================+======================|\r\n| 0 NVIDIA GeForce RTX 2080 Ti Off | 00000000:01:00.0 On | N/A |\r\n| 41% 34C P8 30W / 260W | 288MiB / 22528MiB | 12% Default |\r\n| | | N/A |\r\n+-----------------------------------------+----------------------+----------------------+\r\n \r\n+---------------------------------------------------------------------------------------+\r\n| Processes: |\r\n| GPU GI CI PID Type Process name GPU Memory |\r\n| ID ID Usage |\r\n|=======================================================================================|\r\n| 0 N/A N/A 2216 G /usr/lib/xorg/Xorg 165MiB |\r\n| 0 N/A N/A 2338 G /usr/bin/gnome-shell 34MiB |\r\n| 0 N/A N/A 34805 G ...26077060,3793940789578302769,262144 82MiB |\r\n| 0 N/A N/A 44004 G ...sktop/5088/usr/bin/telegram-desktop 3MiB |\r\n+---------------------------------------------------------------------------------------+\r\n```\r\n\r\n#### My attempt NO.2\r\n\r\nChanged to Pytorch nightly and cuda 12.1 support. `conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia` My Linux is using Nvidia driver version 535.113.01 with cuda 12.2 support.\r\n\r\nPytorch version: 2.2.0.dev20231001\r\nSame error.\r\n\r\n#### My attempt NO.3\r\nDowngrade the Linux driver? (Not tested yet)\r\n\r\n#### My attempt NO.4\r\nUse the Docker version Pytorch and CUDA inside a docker instance. https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch \r\n\r\nAfter downloading the docker image, i started a docker instance by doing so `docker run --gpus all -it --rm nvcr.io/nvidia/pytorch:23.09-py3`\r\n\r\nError\r\n`docker: Error response from daemon: could not select device driver \"\" with capabilities: [[gpu]]`\r\n\r\n\r\n\r\nHow to run llama2-13B-chat or 70B with a RTX graphics card of 22GB RAM? Thanks in advance!\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "meta-llama", "pro": "llama-cookbook", "path": ["examples/README.md", "examples/inference.py"]}], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["examples/inference.py"], "doc": ["examples/README.md"], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "iss_html_url": "https://github.com/meta-llama/llama/issues/201", "iss_label": "", "title": "Torchrun distributed running does not work", "body": "Running in a distributed manner either returns an error, or with the simplest example, produce obviously incorrect output.\r\n\r\nThe following is the result of running 13B model across two nodes. 
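On the `invalid device ordinal` record above: the stock llama2-13b checkpoints ship as two model-parallel shards, so the documented `--nproc_per_node 2` launch spawns a rank 1 that calls `torch.cuda.set_device(1)`; on a single-GPU machine that second ordinal does not exist, regardless of how much VRAM the one card has. A small pre-flight check makes the mismatch explicit (a sketch, not part of the repo):

```python
# Hedged sketch: surface the GPU-count mismatch behind
# "CUDA error: invalid device ordinal" before torchrun is involved.
import torch

nproc_per_node = 2  # what the reporter's torchrun command requests
visible = torch.cuda.device_count()
if visible < nproc_per_node:
    raise SystemExit(
        f"{nproc_per_node} ranks requested but only {visible} CUDA device(s) "
        "visible; any rank >= that count fails in torch.cuda.set_device()."
    )
```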
Node A:\r\n\r\n`python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=0 --master_addr=\"gpu3.lan\" --master_port=1234 example.py --ckpt_dir $MODELS/65B --tokenizer_path $MODELS/tokenizer.model`\r\n\r\nNode B:\r\n\r\n`python -m torch.distributed.run --nproc_per_node 1 --nnodes=2 --node_rank=1 --master_addr=\"gpu3.lan\" --master_port=1234 example.py --ckpt_dir $MODELS/65B --tokenizer_path $MODELS/tokenizer.model`\r\n\r\nIt does complete without error, but the results are messed up:\r\n\r\n![image](https://user-images.githubusercontent.com/252193/225178366-2c929cd0-3e87-42d4-8bb5-5cc737189959.png)\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '57b0eb62de0636e75af471e49e2f1862d908d9d8', 'files': [{'path': 'example.py', 'Loc': {\"(None, 'setup_model_parallel', 19)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["example.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7", "iss_html_url": "https://github.com/meta-llama/llama/issues/670", "iss_label": "", "title": "Counting tokens for Chat models", "body": "Does anyone how to calculate prompt and completion tokens for Llama Chat models for monitoring purposes?\r\nCan we add this in responses as many times we don't have libraries to achieve this in languages like java, kotlin, etc.\r\n\r\nSimilar to tiktoken by openai - https://github.com/openai/tiktoken", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'ea9f33d6d3ea8ed7d560d270986407fd6c2e52b7', 'files': [{'path': 'llama/tokenizer.py', 'Loc': {\"('Tokenizer', 'encode', 31)\": {'mod': []}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["llama/tokenizer.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "8cd608cc019b306ab6d8b7abd61014b436968086", "iss_html_url": "https://github.com/meta-llama/llama/issues/732", "iss_label": "download-install", "title": "download.sh problem, llama model 70B, results in several 0kb .pth files after download; two separate network locations for testing; reported by several users on different networks; MacOS Apple Silicon M2 Ventura 13.4.1 (c) (22F770820d)", "body": "After verifying that all libraries from the requirements.txt were installed in my python3 environment, in bash terminal I run llama-main/download.sh -- however, upon completion downloading (and overall execution) I am finding that one or more consolidated.0x.pth files are zero kilobytes containing no data. \r\n\r\nI have tried to successfully download all .pth files on both WiFi & Ethernet from two separate networks. One at home, on my Verizon 5g ISP and the other on-campus at MIT. The same result occurs. I have verified disk storage space on both machines I attempted to acquire these files. It seems \" consolidated.05.pth \" fails most often; with the successfully acquired .pth files being 17.25 GB in size. 
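For the token-counting record above (whose `file_loc` points at `llama/tokenizer.py`): the distributed `tokenizer.model` is a SentencePiece model, so prompt and completion tokens can be counted with the `sentencepiece` package directly, without the llama repo. A sketch; the model path is a placeholder, and older `sentencepiece` versions need `sp.Load(path)` instead of the constructor argument.

```python
# Hedged sketch: count prompt/completion tokens with the SentencePiece
# model that llama/tokenizer.py wraps. The path is a placeholder.
from sentencepiece import SentencePieceProcessor

sp = SentencePieceProcessor(model_file="tokenizer.model")

prompt = "Why is the sky blue?"
completion = "Because of Rayleigh scattering."
print("prompt tokens:", len(sp.encode(prompt)))
print("completion tokens:", len(sp.encode(completion)))
```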
However this morning I am seeing that consolidated.**05**.pth, consolidated.**04**.pth, and consolidated.**00**.pth have failed\r\n\r\nI am discouraged, as I have attempted to acquire these several times and requested a Meta access key twice. \r\n\r\nAre there any recommendations you can provide me with? Other resources, endpoints, or potential port forwards/triggers that might resolve the problem in some way? \r\n\r\nOr is this a **bug**?\r\n\r\nThank you for your time!!\r\n\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '8cd608cc019b306ab6d8b7abd61014b436968086', 'files': [{'path': 'download.sh', 'Loc': {'(None, None, 23)': {'mod': [23]}}, 'status': 'modified'}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "2", "iss_reason": "3", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["download.sh"]}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "99e19d4f83b7fe77e8b3b692e01019640d7b457a", "iss_html_url": "https://github.com/meta-llama/llama/issues/493", "iss_label": "download-install", "title": "download.sh: line 2: $'\\r': command not found", "body": "run download.sh by cygwin in windows but it gives back \"download.sh: line 2: $'\\r': command not found\"\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '99e19d4f83b7fe77e8b3b692e01019640d7b457a', 'files': [{'path': 'download.sh', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["download.sh"]}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "1076b9c51c77ad06e9d7ba8a4c6df775741732bd", "iss_html_url": "https://github.com/meta-llama/llama/issues/21", "iss_label": "", "title": "Add to huggingface", "body": null, "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "docs", "pro": "transformers", "path": ["model_doc/llama"]}], "analysis": {"iss_type": "4", "iss_reason": "2", "loc_way": "comment", "loc_scope": "", "info_type": "Code"}, "loctype": {"code": [], "doc": ["model_doc/llama"], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e", "iss_html_url": "https://github.com/meta-llama/llama/issues/751", "iss_label": "documentation", "title": "Run llama2 on specified GPU", "body": "Suppose I have 8 A6000 GPUs, I would like to run separate experiments on separate GPUs, how can I do it? For example, I want to run chat_completion.py on CUDA:0 and run text_completion.py on CUDA:1 simultaneously. Are there any ways to do it? 
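The `$'\r': command not found` record above is the classic CRLF symptom: download.sh was checked out or saved with Windows line endings, and bash under Cygwin treats the trailing carriage returns as part of each command. `dos2unix download.sh` is the usual shell fix; the equivalent rewrite in Python, to keep these sketches in one language:

```python
# Hedged sketch: strip Windows CRLF endings from download.sh so the
# shell stops failing on $'\r' (the Python equivalent of dos2unix).
from pathlib import Path

script = Path("download.sh")
script.write_bytes(script.read_bytes().replace(b"\r\n", b"\n"))
```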
Thank you.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': '7565eb6fee2175b2d4fe2cfb45067a61b35d7f5e', 'files': [{'path': 'example_text_completion.py', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nhow can I do it", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Code"}, "loctype": {"code": ["example_text_completion.py"], "doc": [], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "a102a597d1eb5d437f98dc0b55668ff61bc493b8", "iss_html_url": "https://github.com/meta-llama/llama/issues/740", "iss_label": "download-install", "title": "download.sh: Enter for all models fails", "body": "- Procedure\r\n`source download.sh; <enter url>; <Enter for all models>`\r\n- Result\r\nFolders etc. set up, models not downloaded. 403 Forbidden Error\r\n- TS\r\nWas able to download all models by explicitly passing names as a list", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": ["wget"], "other_rep_loc": [], "analysis": {"iss_type": "1", "iss_reason": "3", "loc_way": "comment", "loc_scope": "3", "info_type": "Code"}, "loctype": {"code": [], "doc": [], "test": [], "config": [], "asset": ["wget"]}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "d7e2e37e163981fd674ea2a633fac2014550898d", "iss_html_url": "https://github.com/meta-llama/llama/issues/795", "iss_label": "", "title": "[Question] Is the Use of Llama2 Forbidden in Languages Other Than English?", "body": "Hello,\r\n\r\nI recently came across a claim from [Baichuan-inc](https://github.com/baichuan-inc) during their live stream event and in the press release for the Baichuan2 model. They stated that Meta prohibits the use of Llama2 in languages other than English.\r\n\r\nHowever, after reviewing the [use policy](https://ai.meta.com/llama/use-policy/) and the [license agreement](https://ai.meta.com/llama/license/) provided by Meta, I couldn't find any specific restriction regarding the model's application language. Additionally, in the `Responsible-Use-Guide.pdf`, there are even mentions of considerations for markets in other languages.\r\n\r\nCould you please clarify if the statement by [Baichuan-inc](https://github.com/baichuan-inc) that \"Meta prohibits the use of Llama2 in languages other than English,\" is accurate? \r\n\r\nThank you!\r\n", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{'base_commit': 'd7e2e37e163981fd674ea2a633fac2014550898d', 'files': [{'path': 'MODEL_CARD.md', 'Loc': {}}]}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [], "analysis": {"iss_type": "3\nasking about language support information", "iss_reason": "5", "loc_way": "comment", "loc_scope": "0", "info_type": "Doc"}, "loctype": {"code": [], "doc": ["MODEL_CARD.md"], "test": [], "config": [], "asset": []}} +{"organization": "meta-llama", "repo_name": "llama", "base_commit": "57b0eb62de0636e75af471e49e2f1862d908d9d8", "iss_html_url": "https://github.com/meta-llama/llama/issues/227", "iss_label": "documentation\nresearch-paper", "title": "where is the train file?", "body": "where is the train file? 
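For the "Run llama2 on specified GPU" record above, the standard answer is to pin each experiment to one card with `CUDA_VISIBLE_DEVICES` before CUDA initializes, e.g. `CUDA_VISIBLE_DEVICES=0` for one torchrun job and `CUDA_VISIBLE_DEVICES=1` for the other, with distinct `--master_port` values so the rendezvous does not collide. The same pinning from inside Python, as a sketch:

```python
# Hedged sketch: pin this process to one physical GPU. The mask must be
# set before torch initializes CUDA; the index 0 is only an example.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = "0"  # "1" for the second experiment

import torch  # imported after the mask so only the chosen card is visible

print(torch.cuda.device_count())  # -> 1; the card now appears as cuda:0
```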
I want to learn how to train.", "code": null, "pr_html_url": null, "commit_html_url": null, "file_loc": "{}", "own_code_loc": [], "ass_file_loc": [], "other_rep_loc": [{"org": "meta-llama", "pro": "llama-cookbook", "path": ["llama_finetuning.py"]}], "analysis": {"iss_type": "3", "iss_reason": "5", "loc_way": "comment", "loc_scope": "2", "info_type": "Code\nDoc"}, "loctype": {"code": ["llama_finetuning.py"], "doc": [], "test": [], "config": [], "asset": []}}
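The final record ("where is the train file?") is answered by its own `other_rep_loc`: the llama repo ships inference code only, and fine-tuning lives in meta-llama/llama-cookbook (`llama_finetuning.py`). For orientation only, a minimal single-step causal-LM fine-tuning sketch against HF-converted weights; the model name, learning rate, and toy batch are assumptions, not the cookbook's actual recipe.

```python
# Hedged sketch: one causal-LM fine-tuning step on HF-converted Llama
# weights (assumes gated-weights access). Hyperparameters are toy values.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"  # assumption: HF-format checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

batch = tokenizer("Q: where is the train file?\nA: in llama-cookbook.",
                  return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss  # shifted cross-entropy
loss.backward()
optimizer.step()
```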