organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
scikit-learn | scikit-learn | 559609fe98ec2145788133687e64a6e87766bc77 | https://github.com/scikit-learn/scikit-learn/issues/25525 | Bug
module:feature_extraction | Extend SequentialFeatureSelector example to demonstrate how to use negative tol | ### Describe the bug
I used the **SequentialFeatureSelector** for feature selection in my code, with the direction set to "backward." The tolerance value is negative, and the selection process stops when the drop in the metric (AUC in this case) exceeds the magnitude of the tolerance. Generally, increasing the number of features results in a higher AUC, but sacrificing some features, especially correlated ones that contribute little, can produce a more parsimonious model with only a slightly lower AUC. The code worked as expected in **sklearn 1.1.1**, but when I updated to **sklearn 1.2.1**, I encountered the following error.
### Steps/Code to Reproduce
```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import Pipeline
X, y = load_breast_cancer(return_X_y=True)
TOL = -0.001
feature_selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select="auto",
    direction="backward",
    scoring="roc_auc",
    tol=TOL
)
pipe = Pipeline(
    [('scaler', StandardScaler()),
     ('feature_selector', feature_selector),
     ('log_reg', LogisticRegression(max_iter=1000))]
)
if __name__ == "__main__":
    pipe.fit(X, y)
    print(pipe['log_reg'].coef_[0])
```
### Expected Results
```
$ python sfs_tol.py
[-2.0429818 0.5364346 -1.35765488 -2.85009904 -2.84603016]
```
### Actual Results
```python-traceback
$ python sfs_tol.py
Traceback (most recent call last):
  File "/home/modelling/users-workspace/nsofinij/lab/open-source/sfs_tol.py", line 28, in <module>
    pipe.fit(X, y)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py", line 401, in fit
    Xt = self._fit(X, y, **fit_params_steps)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py", line 359, in _fit
    X, fitted_transformer = fit_transform_one_cached(
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/joblib/memory.py", line 349, in __call__
    return self.func(*args, **kwargs)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/pipeline.py", line 893, in _fit_transform_one
    res = transformer.fit_transform(X, y, **fit_params)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_set_output.py", line 142, in wrapped
    data_to_wrap = f(self, X, *args, **kwargs)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py", line 862, in fit_transform
    return self.fit(X, y, **fit_params).transform(X)
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/feature_selection/_sequential.py", line 201, in fit
    self._validate_params()
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/base.py", line 581, in _validate_params
    validate_parameter_constraints(
  File "/home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/sklearn/utils/_param_validation.py", line 97, in validate_parameter_constraints
    raise InvalidParameterError(
sklearn.utils._param_validation.InvalidParameterError: The 'tol' parameter of SequentialFeatureSelector must be None or a float in the range (0, inf). Got -0.001 instead.
```
### Versions
```shell
System:
python: 3.10.8 | packaged by conda-forge | (main, Nov 22 2022, 08:26:04) [GCC 10.4.0]
executable: /home/modelling/opt/anaconda3/envs/py310/bin/python
machine: Linux-4.14.301-224.520.amzn2.x86_64-x86_64-with-glibc2.26
Python dependencies:
sklearn: 1.2.1
pip: 23.0
setuptools: 66.1.1
numpy: 1.24.1
scipy: 1.10.0
Cython: None
pandas: 1.5.3
matplotlib: 3.6.3
joblib: 1.2.0
threadpoolctl: 3.1.0
Built with OpenMP: True
threadpoolctl info:
user_api: openmp
internal_api: openmp
prefix: libgomp
filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scikit_learn.libs/libgomp-a34b3233.so.1.0.0
version: None
num_threads: 64
user_api: blas
internal_api: openblas
prefix: libopenblas
filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/numpy.libs/libopenblas64_p-r0-15028c96.3.21.so
version: 0.3.21
threading_layer: pthreads
architecture: SkylakeX
num_threads: 64
user_api: blas
internal_api: openblas
prefix: libopenblas
filepath: /home/modelling/opt/anaconda3/envs/py310/lib/python3.10/site-packages/scipy.libs/libopenblasp-r0-41284840.3.18.so
version: 0.3.18
threading_layer: pthreads
architecture: SkylakeX
num_threads: 64
```
| null | https://github.com/scikit-learn/scikit-learn/pull/26205 | null | {'base_commit': '559609fe98ec2145788133687e64a6e87766bc77', 'files': [{'path': 'examples/feature_selection/plot_select_from_model_diabetes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [145], 'mod': [123, 124, 125]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/feature_selection/plot_select_from_model_diabetes.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
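The stopping rule the scikit-learn issue above relies on can be sketched in plain Python (a simplified sketch with hypothetical scores, not scikit-learn's actual implementation): with `direction="backward"`, elimination continues while the score change stays above `tol`, so a negative `tol` deliberately tolerates small decreases.

```python
def should_stop(old_score, new_score, tol):
    # Simplified sketch of a sequential-selection stopping rule:
    # stop when the score gain fails to exceed tol.  With a negative
    # tol, small score *decreases* are tolerated, which is exactly the
    # backward-selection use case described in the issue.
    return (new_score - old_score) < tol

# tol = -0.001: an AUC drop of 0.0005 keeps eliminating features,
# while a drop of 0.005 stops the search.
assert should_stop(0.990, 0.9895, -0.001) is False
assert should_stop(0.990, 0.985, -0.001) is True
```

This is why the parameter-validation change in 1.2.1 (restricting `tol` to the range `(0, inf)`) broke the workflow: the negative value is meaningful for backward selection.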
pallets | flask | cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4 | https://github.com/pallets/flask/issues/2264 | cli | Handle app factory in FLASK_APP | `FLASK_APP=myproject.app:create_app('dev')`
[Gunicorn does this with `eval`](https://github.com/benoitc/gunicorn/blob/fbd151e9841e2c87a18512d71475bcff863a5171/gunicorn/util.py#L364), which I'm not super happy with. Instead, we could use `literal_eval` to allow a simple list of arguments. The line should never be so complicated that `eval` would be necessary anyway.
~~~python
import re
from ast import literal_eval

# might need to fix this regex
m = re.search(r'(\w+)(\(.*\))', app_obj)
if m:
    app = getattr(mod, m.group(1))(*literal_eval(m.group(2)))
~~~ | null | https://github.com/pallets/flask/pull/2326 | null | {'base_commit': 'cb94f4c5d3d4e1797207fd03d20d06c7bc0d05b4', 'files': [{'path': 'flask/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 12]}, "(None, 'find_best_app', 32)": {'mod': [58, 62, 69, 71]}, "(None, 'call_factory', 82)": {'mod': [82, 83, 84, 85, 86, 88, 89, 90, 91, 92, 93]}, "(None, 'locate_app', 125)": {'mod': [151, 153, 154, 155, 156, 158]}}}, {'path': 'tests/test_cli.py', 'status': 'modified', 'Loc': {"(None, 'test_locate_app', 148)": {'add': [152], 'mod': [154, 155, 156, 157, 158, 159, 160, 161]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"flask/cli.py"
],
"doc": [],
"test": [
"tests/test_cli.py"
],
"config": [],
"asset": []
} | 1 |
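A safe parse along the lines the Flask issue proposes could look like the sketch below. This is a hedged illustration only: the real implementation landed in `flask/cli.py` via the linked PR, and `parse_factory_call` is a hypothetical helper name. Note that `literal_eval("('dev')")` yields a bare string, not a one-tuple, so the sketch normalizes the result.

```python
import re
from ast import literal_eval

def parse_factory_call(spec):
    # Parse "create_app('dev')" into (name, args) without eval().
    # Sketch only; Flask's actual logic lives in flask/cli.py.
    m = re.match(r"^(\w+)\s*(\(.*\))?$", spec)
    if m is None:
        raise ValueError(f"could not parse {spec!r}")
    name, call = m.group(1), m.group(2)
    if not call or call == "()":
        return name, ()
    args = literal_eval(call)
    if not isinstance(args, tuple):
        # "('dev')" parses as the bare string 'dev'; wrap it
        args = (args,)
    return name, args

# The parsed name/args can then be resolved against the module:
#   app = getattr(mod, name)(*args)
```

Because `literal_eval` only accepts Python literals, arbitrary expressions in `FLASK_APP` are rejected instead of executed, which addresses the objection to Gunicorn's `eval` approach.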
localstack | localstack | 737ca72b7bce6e377dd6876eacee63338fa8c30c | https://github.com/localstack/localstack/issues/894 | ERROR:localstack.services.generic_proxy: Error forwarding request: | Starting local dev environment. CTRL-C to quit.
Starting mock API Gateway (http port 4567)...
Starting mock DynamoDB (http port 4569)...
Starting mock SES (http port 4579)...
Starting mock Kinesis (http port 4568)...
Starting mock Redshift (http port 4577)...
Starting mock S3 (http port 4572)...
Starting mock CloudWatch (http port 4582)...
Starting mock CloudFormation (http port 4581)...
Starting mock SSM (http port 4583)...
Starting mock SQS (http port 4576)...
Starting local Elasticsearch (http port 4571)...
Starting mock SNS (http port 4575)...
Starting mock DynamoDB Streams service (http port 4570)...
Starting mock Firehose service (http port 4573)...
Starting mock Route53 (http port 4580)...
Starting mock ES service (http port 4578)...
Starting mock Lambda service (http port 4574)...
2018-08-11T13:33:08:ERROR:localstack.services.generic_proxy: Error forwarding request: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d442415d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
Traceback (most recent call last):
  File "/home/maruf/.local/lib/python2.7/site-packages/localstack/services/generic_proxy.py", line 201, in forward
    headers=forward_headers)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 112, in post
    return request('post', url, data=data, json=json, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/api.py", line 58, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 508, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/sessions.py", line 618, in send
    r = adapter.send(request, **kwargs)
  File "/home/maruf/.local/lib/python2.7/site-packages/requests/adapters.py", line 508, in send
    raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='127.0.0.1', port=4564): Max retries exceeded with url: / (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f4d442415d0>: Failed to establish a new connection: [Errno 111] Connection refused',))
| null | https://github.com/localstack/localstack/pull/1526 | null | {'base_commit': '737ca72b7bce6e377dd6876eacee63338fa8c30c', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186]}}}, {'path': 'localstack/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'localstack/services/kinesis/kinesis_starter.py', 'status': 'modified', 'Loc': {"(None, 'start_kinesis', 14)": {'add': [17], 'mod': [14, 23, 24]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/config.py",
"localstack/services/kinesis/kinesis_starter.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
huggingface | transformers | d2871b29754abd0f72cf42c299bb1c041519f7bc | https://github.com/huggingface/transformers/issues/30 | [Feature request] Add example of finetuning the pretrained models on custom corpus | null | https://github.com/huggingface/transformers/pull/25107 | null | {'base_commit': 'd2871b29754abd0f72cf42c299bb1c041519f7bc', 'files': [{'path': 'src/transformers/modeling_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [75, 108]}, "('PreTrainedModel', 'from_pretrained', 1959)": {'add': [2227]}, "(None, 'load_state_dict', 442)": {'mod': [461]}, "('PreTrainedModel', '_load_pretrained_model', 3095)": {'mod': [3183, 3388, 3389, 3390, 3391, 3392, 3393, 3394, 3395, 3396, 3397, 3398, 3399, 3400, 3401, 3402, 3403, 3404]}}}, {'path': 'src/transformers/trainer.py', 'status': 'modified', 'Loc': {"('Trainer', '__init__', 313)": {'mod': [468, 469, 470]}, "('Trainer', '_wrap_model', 1316)": {'mod': [1382, 1385, 1387]}, "('Trainer', 'train', 1453)": {'mod': [1520]}, "('Trainer', '_inner_training_loop', 1552)": {'mod': [1654]}, "('Trainer', 'create_accelerator_and_postprocess', 3866)": {'mod': [3889]}}}, {'path': 'src/transformers/training_args.py', 'status': 'modified', 'Loc': {"('TrainingArguments', None, 158)": {'add': [464], 'mod': [439, 442, 445, 457]}, "('TrainingArguments', '__post_init__', 1221)": {'add': [1522, 1524, 1585], 'mod': [1529, 1530, 1531, 1533, 1534, 1535, 1536, 1537, 1543, 1544, 1547, 1548, 1550, 1551, 1555, 1556, 1558, 1559, 1560, 1589, 1591, 1593, 1594, 1595, 1596, 1597, 1598, 1599, 1602]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/trainer.py",
"src/transformers/modeling_utils.py",
"src/transformers/training_args.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | ||
pandas-dev | pandas | 51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319 | https://github.com/pandas-dev/pandas/issues/11080 | Indexing
Performance | PERF: checking is_monotonic_increasing/decreasing before sorting on an index | We don't keep the sortedness state in an index per se, but it is rather cheap to check
- `is_monotonic_increasing` or `is_monotonic_decreasing` on a regular index
- MultiIndex should check `is_lexsorted` (this might be done already)
```
In [8]: df = DataFrame(np.random.randn(1000000,2),columns=list('AB'))
In [9]: %timeit df.sort_index()
10 loops, best of 3: 37.1 ms per loop
In [10]: %timeit -n 1 -r 1 df.index.is_monotonic_increasing
1 loops, best of 1: 2.01 ms per loop
In [11]: %timeit -n 1 -r 1 df.index.is_monotonic_increasin^C
KeyboardInterrupt
In [11]: %timeit df.set_index('A').sort_index()
10 loops, best of 3: 175 ms per loop
In [12]: %timeit -n 1 -r 1 df.set_index('A').index.is_monotonic_increasing
1 loops, best of 1: 9.54 ms per loop
```
| null | https://github.com/pandas-dev/pandas/pull/11294 | null | {'base_commit': '51a70dcb7133bc7cb8e6bea5da39a2cf58fa8319', 'files': [{'path': 'asv_bench/benchmarks/frame_methods.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [932]}}}, {'path': 'doc/source/whatsnew/v0.17.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [54]}}}, {'path': 'pandas/core/frame.py', 'status': 'modified', 'Loc': {"('DataFrame', 'sort_index', 3126)": {'add': [3159]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/frame.py",
"asv_bench/benchmarks/frame_methods.py"
],
"doc": [
"doc/source/whatsnew/v0.17.1.txt"
],
"test": [],
"config": [],
"asset": []
} | 1 |
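The optimization proposed in the pandas issue above amounts to an O(n) monotonicity check that short-circuits the O(n log n) sort. A plain-Python sketch of the idea (not pandas' actual implementation, which uses the cached `Index.is_monotonic_increasing` property):

```python
def sort_index_fast(values):
    # Cheap O(n) check first, mirroring Index.is_monotonic_increasing;
    # skip the expensive sort entirely when the data is already ordered.
    if all(a <= b for a, b in zip(values, values[1:])):
        return values
    return sorted(values)

assert sort_index_fast([1, 2, 3]) == [1, 2, 3]   # fast path, no sort
assert sort_index_fast([3, 1, 2]) == [1, 2, 3]   # falls back to sorting
```

The timings in the issue show why this pays off: the monotonicity check costs about 2 ms on a million-row index, versus roughly 37 ms for the full `sort_index()` call.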
zylon-ai | private-gpt | fdb45741e521d606b028984dbc2f6ac57755bb88 | https://github.com/zylon-ai/private-gpt/issues/10 | Suggestions for speeding up ingestion? | I presume I must be doing something wrong, as it is taking hours to ingest a 500kbyte text on an i9-12900 with 128GB. In fact it's not even done yet. Using models are recommended.
Help?
Thanks
Some output:
llama_print_timings: load time = 674.34 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per run)
llama_print_timings: prompt eval time = 12526.78 ms / 152 tokens ( 82.41 ms per token)
llama_print_timings: eval time = 157.46 ms / 1 runs ( 157.46 ms per run)
llama_print_timings: total time = 12715.48 ms | null | https://github.com/zylon-ai/private-gpt/pull/224 | null | {'base_commit': 'fdb45741e521d606b028984dbc2f6ac57755bb88', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {'path': 'example.env', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [2]}}}, {'path': 'ingest.py', 'status': 'modified', 'Loc': {"(None, 'main', 71)": {'add': [79], 'mod': [75, 76, 81, 84, 87, 90]}, '(None, None, None)': {'mod': [22]}}}, {'path': 'privateGPT.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 11]}, "(None, 'main', 20)": {'mod': [21, 22]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"ingest.py",
"privateGPT.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
"example.env"
],
"asset": []
} | 1 | |
huggingface | transformers | 9fef668338b15e508bac99598dd139546fece00b | https://github.com/huggingface/transformers/issues/9 | Crash at the end of training | Hi, I tried running the SQuAD model this morning (on a single GPU with gradient accumulation over 3 steps), but after 3 hours of training my job failed with the output below. I was running the code, unmodified, from commit 3bfbc21376af691b912f3b6256bbeaf8e0046ba8.
Is this an issue you know about?
```
11/08/2018 17:50:03 - INFO - __main__ - device cuda n_gpu 1 distributed training False
11/08/2018 17:50:18 - INFO - __main__ - *** Example ***
11/08/2018 17:50:18 - INFO - __main__ - unique_id: 1000000000
11/08/2018 17:50:18 - INFO - __main__ - example_index: 0
11/08/2018 17:50:18 - INFO - __main__ - doc_span_index: 0
11/08/2018 17:50:18 - INFO - __main__ - tokens: [CLS] to whom did the virgin mary allegedly appear in 1858 in lou ##rdes france ? [SEP] architectural ##ly , the school has a catholic character . atop the main building ' s gold dome is a golden statue of the virgin mary . immediately in front of the main building and facing it , is a copper statue of christ with arms up ##rai ##sed with the legend " ve ##ni ##te ad me om ##nes " . next to the main building is the basilica of the sacred heart . immediately behind the basilica is the gr ##otto , a marian place of prayer and reflection . it is a replica of the gr ##otto at lou ##rdes , france where the virgin mary reputed ##ly appeared to saint bern ##ade ##tte so ##ub ##iro ##us in 1858 . at the end of the main drive ( and in a direct line that connects through 3 statues and the gold dome ) , is a simple , modern stone statue of mary . [SEP]
11/08/2018 17:50:18 - INFO - __main__ - token_to_orig_map: 17:0 18:0 19:0 20:1 21:2 22:3 23:4 24:5 25:6 26:6 27:7 28:8 29:9 30:10 31:10 32:10 33:11 34:12 35:13 36:14 37:15 38:16 39:17 40:18 41:19 42:20 43:20 44:21 45:22 46:23 47:24 48:25 49:26 50:27 51:28 52:29 53:30 54:30 55:31 56:32 57:33 58:34 59:35 60:36 61:37 62:38 63:39 64:39 65:39 66:40 67:41 68:42 69:43 70:43 71:43 72:43 73:44 74:45 75:46 76:46 77:46 78:46 79:47 80:48 81:49 82:50 83:51 84:52 85:53 86:54 87:55 88:56 89:57 90:58 91:58 92:59 93:60 94:61 95:62 96:63 97:64 98:65 99:65 100:65 101:66 102:67 103:68 104:69 105:70 106:71 107:72 108:72 109:73 110:74 111:75 112:76 113:77 114:78 115:79 116:79 117:80 118:81 119:81 120:81 121:82 122:83 123:84 124:85 125:86 126:87 127:87 128:88 129:89 130:90 131:91 132:91 133:91 134:92 135:92 136:92 137:92 138:93 139:94 140:94 141:95 142:96 143:97 144:98 145:99 146:100 147:101 148:102 149:102 150:103 151:104 152:105 153:106 154:107 155:108 156:109 157:110 158:111 159:112 160:113 161:114 162:115 163:115 164:115 165:116 166:117 167:118 168:118 169:119 170:120 171:121 172:122 173:123 174:123
11/08/2018 17:50:18 - INFO - __main__ - token_is_max_context: 17:True 18:True 19:True 20:True 21:True 22:True 23:True 24:True 25:True 26:True 27:True 28:True 29:True 30:True 31:True 32:True 33:True 34:True 35:True 36:True 37:True 38:True 39:True 40:True 41:True 42:True 43:True 44:True 45:True 46:True 47:True 48:True 49:True 50:True 51:True 52:True 53:True 54:True 55:True 56:True 57:True 58:True 59:True 60:True 61:True 62:True 63:True 64:True 65:True 66:True 67:True 68:True 69:True 70:True 71:True 72:True 73:True 74:True 75:True 76:True 77:True 78:True 79:True 80:True 81:True 82:True 83:True 84:True 85:True 86:True 87:True 88:True 89:True 90:True 91:True 92:True 93:True 94:True 95:True 96:True 97:True 98:True 99:True 100:True 101:True 102:True 103:True 104:True 105:True 106:True 107:True 108:True 109:True 110:True 111:True 112:True 113:True 114:True 115:True 116:True 117:True 118:True 119:True 120:True 121:True 122:True 123:True 124:True 125:True 126:True 127:True 128:True 129:True 130:True 131:True 132:True 133:True 134:True 135:True 136:True 137:True 138:True 139:True 140:True 141:True 142:True 143:True 144:True 145:True 146:True 147:True 148:True 149:True 150:True 151:True 152:True 153:True 154:True 155:True 156:True 157:True 158:True 159:True 160:True 161:True 162:True 163:True 164:True 165:True 166:True 167:True 168:True 169:True 170:True 171:True 172:True 173:True 174:True
11/08/2018 17:50:18 - INFO - __main__ - input_ids: 101 2000 3183 2106 1996 6261 2984 9382 3711 1999 8517 1999 10223 26371 2605 1029 102 6549 2135 1010 1996 2082 2038 1037 3234 2839 1012 10234 1996 2364 2311 1005 1055 2751 8514 2003 1037 3585 6231 1997 1996 6261 2984 1012 3202 1999 2392 1997 1996 2364 2311 1998 5307 2009 1010 2003 1037 6967 6231 1997 4828 2007 2608 2039 14995 6924 2007 1996 5722 1000 2310 3490 2618 4748 2033 18168 5267 1000 1012 2279 2000 1996 2364 2311 2003 1996 13546 1997 1996 6730 2540 1012 3202 2369 1996 13546 2003 1996 24665 23052 1010 1037 14042 2173 1997 7083 1998 9185 1012 2009 2003 1037 15059 1997 1996 24665 23052 2012 10223 26371 1010 2605 2073 1996 6261 2984 22353 2135 2596 2000 3002 16595 9648 4674 2061 12083 9711 2271 1999 8517 1012 2012 1996 2203 1997 1996 2364 3298 1006 1998 1999 1037 3622 2240 2008 8539 2083 1017 11342 1998 1996 2751 8514 1007 1010 2003 1037 3722 1010 2715 2962 6231 1997 2984 1012 102 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
11/08/2018 17:50:18 - INFO - __main__ - input_mask: 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
... [truncated] ...
Iteration: 100%|█████████▉| 29314/29324 [3:27:55<00:04, 2.36it/s]
Iteration: 100%|█████████▉| 29315/29324 [3:27:55<00:03, 2.44it/s]
Iteration: 100%|█████████▉| 29316/29324 [3:27:56<00:03, 2.26it/s]
Iteration: 100%|█████████▉| 29317/29324 [3:27:56<00:02, 2.35it/s]
Iteration: 100%|█████████▉| 29318/29324 [3:27:56<00:02, 2.44it/s]
Iteration: 100%|█████████▉| 29319/29324 [3:27:57<00:02, 2.25it/s]
Iteration: 100%|█████████▉| 29320/29324 [3:27:57<00:01, 2.35it/s]
Iteration: 100%|█████████▉| 29321/29324 [3:27:58<00:01, 2.41it/s]
Iteration: 100%|█████████▉| 29322/29324 [3:27:58<00:00, 2.25it/s]
Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]
Traceback (most recent call last):
  File "code/run_squad.py", line 929, in <module>
    main()
  File "code/run_squad.py", line 862, in main
    loss = model(input_ids, segment_ids, input_mask, start_positions, end_positions)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/0x0d4ff90d01fa4168983197b17d73bb0c_dependencies/code/modeling.py", line 467, in forward
    start_loss = loss_fct(start_logits, start_positions)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 477, in __call__
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/loss.py", line 862, in forward
    ignore_index=self.ignore_index, reduction=self.reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1550, in cross_entropy
    return nll_loss(log_softmax(input, 1), target, weight, None, ignore_index, None, reduction)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py", line 1403, in nll_loss
    if input.size(0) != target.size(0):
RuntimeError: dimension specified as 0 but tensor has no dimensions
Exception ignored in: <bound method tqdm.__del__ of Iteration: 100%|█████████▉| 29323/29324 [3:27:59<00:00, 2.36it/s]>
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 931, in __del__
    self.close()
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 1133, in close
    self._decr_instances(self)
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_tqdm.py", line 496, in _decr_instances
    cls.monitor.exit()
  File "/usr/local/lib/python3.6/dist-packages/tqdm/_monitor.py", line 52, in exit
    self.join()
  File "/usr/lib/python3.6/threading.py", line 1053, in join
    raise RuntimeError("cannot join current thread")
RuntimeError: cannot join current thread
``` | null | https://github.com/huggingface/transformers/pull/16310 | null | {'base_commit': '9fef668338b15e508bac99598dd139546fece00b', 'files': [{'path': 'tests/big_bird/test_modeling_big_bird.py', 'status': 'modified', 'Loc': {"('BigBirdModelTester', '__init__', 47)": {'mod': [73]}, "('BigBirdModelTest', 'test_fast_integration', 561)": {'mod': [584]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"tests/big_bird/test_modeling_big_bird.py"
],
"config": [],
"asset": []
} | 1 | |
psf | requests | ccabcf1fca906bfa6b65a3189c1c41061e6c1042 | https://github.com/psf/requests/issues/3698 | AttributeError: 'NoneType' object has no attribute 'read' | Hello :)
After a recent upgrade of our [coala](https://github.com/coala/coala) project to `requests` 2.12.1, we encountered an exception in our test suite that seems to be caused by `requests`.
Build: https://ci.appveyor.com/project/coala/coala-bears/build/1.0.3537/job/1wm7b4u9yhgkxkgn
Relevant part:
```
================================== FAILURES ===================================
_________________ InvalidLinkBearTest.test_redirect_threshold _________________
self = <tests.general.InvalidLinkBearTest.InvalidLinkBearTest testMethod=test_redirect_threshold>
    def test_redirect_threshold(self):
        long_url_redirect = """
        https://bitbucket.org/api/301
        https://bitbucket.org/api/302
        """.splitlines()
        short_url_redirect = """
        http://httpbin.org/status/301
        """.splitlines()
        self.assertResult(valid_file=long_url_redirect,
                          invalid_file=short_url_redirect,
>                         settings={'follow_redirects': 'yeah'})
tests\general\InvalidLinkBearTest.py:157:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
tests\general\InvalidLinkBearTest.py:75: in assertResult
out = list(uut.run("valid", valid_file, **settings))
bears\general\InvalidLinkBear.py:80: in run
file, timeout, link_ignore_regex):
bears\general\InvalidLinkBear.py:53: in find_links_in_file
code = InvalidLinkBear.get_status_code(link, timeout)
bears\general\InvalidLinkBear.py:37: in get_status_code
timeout=timeout).status_code
C:\Python34\lib\site-packages\requests\api.py:96: in head
return request('head', url, **kwargs)
C:\Python34\lib\site-packages\requests\api.py:56: in request
return session.request(method=method, url=url, **kwargs)
C:\Python34\lib\site-packages\requests\sessions.py:488: in request
resp = self.send(prep, **send_kwargs)
C:\Python34\lib\site-packages\requests_mock\mocker.py:69: in _fake_send
return self._real_send(session, request, **kwargs)
C:\Python34\lib\site-packages\requests\sessions.py:641: in send
r.content
C:\Python34\lib\site-packages\requests\models.py:772: in content
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
    def generate():
        # Special case for urllib3.
        if hasattr(self.raw, 'stream'):
            try:
                for chunk in self.raw.stream(chunk_size, decode_content=True):
                    yield chunk
            except ProtocolError as e:
                raise ChunkedEncodingError(e)
            except DecodeError as e:
                raise ContentDecodingError(e)
            except ReadTimeoutError as e:
                raise ConnectionError(e)
        else:
            # Standard file-like object.
            while True:
>               chunk = self.raw.read(chunk_size)
E               AttributeError: 'NoneType' object has no attribute 'read'
C:\Python34\lib\site-packages\requests\models.py:705: AttributeError
```
Happens on both Windows and Linux.
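The failing line is `self.raw.read(chunk_size)` with `self.raw` set to `None` (the mocked response never received a raw stream). The shape of a guard for that case can be sketched like this (hypothetical simplified class, not the real `requests.models.Response`):

```python
class FakeResponse:
    """Minimal stand-in for a response whose raw stream may be missing."""

    def __init__(self, raw=None):
        self.raw = raw          # None when no body was ever attached
        self._content = False   # False means "not read yet"

    @property
    def content(self):
        if self._content is False:
            if self.raw is None:
                # Guard: no raw stream -> no body, instead of
                # AttributeError: 'NoneType' object has no attribute 'read'
                self._content = None
            else:
                self._content = b"".join(self.raw)
        return self._content

assert FakeResponse().content is None
assert FakeResponse(raw=[b"ab", b"cd"]).content == b"abcd"
```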
Thanks in advance :) | null | https://github.com/psf/requests/pull/3718 | null | {'base_commit': 'ccabcf1fca906bfa6b65a3189c1c41061e6c1042', 'files': [{'path': 'requests/models.py', 'status': 'modified', 'Loc': {"('Response', 'content', 763)": {'mod': [772]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', None, 55)": {'add': [1096]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/models.py"
],
"doc": [],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | fc805074be7b3b507bc1699e537f9b691c6f91b9 | https://github.com/AntonOsika/gpt-engineer/issues/674 | bug
documentation | ModuleNotFoundError: No module named 'tkinter' | **Bug description**
When running `gpt-engineer --improve` (using the recent version from PyPI), I get the following output:
```
$ gpt-engineer --improve
Traceback (most recent call last):
  File "/home/.../.local/bin/gpt-engineer", line 5, in <module>
    from gpt_engineer.main import app
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/main.py", line 12, in <module>
    from gpt_engineer.collect import collect_learnings
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/collect.py", line 5, in <module>
    from gpt_engineer import steps
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/steps.py", line 19, in <module>
    from gpt_engineer.file_selector import FILE_LIST_NAME, ask_for_files
  File "/home/.../.local/pipx/venvs/gpt-engineer/lib/python3.10/site-packages/gpt_engineer/file_selector.py", line 4, in <module>
    import tkinter as tk
ModuleNotFoundError: No module named 'tkinter'
```
**Expected behavior**
No error.
In https://github.com/AntonOsika/gpt-engineer/pull/465, no changes were made to the required packages, so tkinter might need to be added there. (Or made optional.)
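One way to make the GUI dependency optional is to probe for it instead of importing it unconditionally at module load time. A minimal sketch (the fallback names here are hypothetical, not the actual gpt-engineer fix):

```python
import importlib.util

def gui_available(module_name="tkinter"):
    """True if the optional GUI toolkit can be imported on this system."""
    return importlib.util.find_spec(module_name) is not None

# Fall back to a terminal-based file selector when tkinter is missing,
# instead of crashing with ModuleNotFoundError at import time.
selector = "tkinter-dialog" if gui_available() else "terminal-prompt"
```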
EDIT: The error happens always, regardless of the command line parameter. | null | https://github.com/AntonOsika/gpt-engineer/pull/675 | null | {'base_commit': 'fc805074be7b3b507bc1699e537f9b691c6f91b9', 'files': [{'path': 'docs/installation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docs/installation.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
pallets | flask | 85dce2c836fe03aefc07b7f4e0aec575e170f1cd | https://github.com/pallets/flask/issues/593 | blueprints | Nestable blueprints | I'd like to be able to register "sub-blueprints" using `Blueprint.register_blueprint(*args, **kwargs)`. This would register the nested blueprints with an app when the "parent" is registered with it. All parameters are preserved, other than `url_prefix`, which is handled similarly to `add_url_rule`. A naive implementation could look like this:
``` python
class Blueprint(object):
    ...
    def register_blueprint(self, blueprint, **options):
        def deferred(state):
            url_prefix = options.get('url_prefix')
            if url_prefix is None:
                url_prefix = blueprint.url_prefix
            if 'url_prefix' in options:
                del options['url_prefix']
            state.app.register_blueprint(blueprint, url_prefix=url_prefix, **options)
        self.record(deferred)
```
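The prefix handling inside `deferred` reduces to one rule: an explicit `url_prefix` passed at registration wins, otherwise the nested blueprint's own prefix is used. That rule can be checked in isolation (hypothetical helper, extracted here purely for illustration):

```python
def resolve_prefix(options, blueprint_prefix):
    """Mirror the deferred() body: an explicit option beats the blueprint's own prefix."""
    url_prefix = options.pop('url_prefix', None)
    if url_prefix is None:
        url_prefix = blueprint_prefix
    return url_prefix

assert resolve_prefix({'url_prefix': '/api'}, '/child') == '/api'
assert resolve_prefix({}, '/child') == '/child'
```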
| null | https://github.com/pallets/flask/pull/3923 | null | {'base_commit': '85dce2c836fe03aefc07b7f4e0aec575e170f1cd', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, 71)': {'add': [71]}}}, {'path': 'docs/blueprints.rst', 'status': 'modified', 'Loc': {'(None, None, 122)': {'add': [122]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': {"('Flask', '__call__', 1982)": {'add': [1987]}, "('Flask', 'update_template_context', 712)": {'mod': [726, 727, 728]}, "('Flask', 'register_blueprint', 971)": {'mod': [990, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1004]}, "('Flask', '_find_error_handler', 1230)": {'mod': [1238, 1239, 1240, 1241, 1242, 1243, 1244]}, "('Flask', 'preprocess_request', 1741)": {'mod': [1752, 1755, 1756, 1761, 1762]}, "('Flask', 'process_response', 1768)": {'mod': [1782, 1784, 1785]}, "('Flask', 'do_teardown_request', 1794)": {'mod': [1818, 1819, 1820]}}}, {'path': 'src/flask/blueprints.py', 'status': 'modified', 'Loc': {"('BlueprintSetupState', '__init__', 16)": {'add': [47]}, "('Blueprint', '__init__', 141)": {'add': [170]}, "('Blueprint', 'register', 213)": {'add': [225], 'mod': [281, 282, 286, 287, 288, 289, 290, 291, 292, 293]}, "('BlueprintSetupState', 'add_url_rule', 53)": {'mod': [71]}, "('Blueprint', None, 78)": {'mod': [213]}}}, {'path': 'tests/test_blueprints.py', 'status': 'modified', 'Loc': {"(None, 'test_app_url_processors', 828)": {'add': [852]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/flask/blueprints.py",
"src/flask/app.py"
],
"doc": [
"docs/blueprints.rst",
"CHANGES.rst"
],
"test": [
"tests/test_blueprints.py"
],
"config": [],
"asset": []
} | null |
AUTOMATIC1111 | stable-diffusion-webui | f92d61497a426a19818625c3ccdaae9beeb82b31 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14263 | bug | [Bug]: KeyError: "do_not_save" when trying to save a prompt | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
When I try to save a prompt, it errors in the console saying
```
  File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py", line 212, in save_styles
    style_paths.remove("do_not_save")
KeyError: 'do_not_save'
```
and the file is not modified
I manually commented that line out and it doesn't seem to break anything, except that the file is saved to styles.csv.csv instead of styles.csv.
### Steps to reproduce the problem
Try to save a prompt
### What should have happened?
Save into style.csv with no error
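The crash comes from `set.remove`, which raises `KeyError` when the element is absent; `set.discard` is the tolerant variant. A generic illustration of the difference (not the actual patch to `modules/styles.py`):

```python
style_paths = {"styles.csv"}

# set.remove raises KeyError if "do_not_save" was never added:
try:
    style_paths.remove("do_not_save")
except KeyError:
    pass  # the failure reported above

# set.discard is a no-op when the element is missing:
style_paths.discard("do_not_save")
assert style_paths == {"styles.csv"}
```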
### Sysinfo
{
"Platform": "Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38",
"Python": "3.11.4",
"Version": "v1.7.0-RC-5-gf92d6149",
"Commit": "f92d61497a426a19818625c3ccdaae9beeb82b31",
"Script path": "/home/ciel/stable-diffusion/stable-diffusion-webui",
"Data path": "/home/ciel/stable-diffusion/stable-diffusion-webui",
"Extensions dir": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions",
"Checksum": "e15aad6adb98a2a0ad13cad2b45b61b03565ef4f258783021da82b4ef7f37fa9",
"Commandline": [
"launch.py"
],
"Torch env info": {
"torch_version": "2.2.0",
"is_debug_build": "False",
"cuda_compiled_version": "N/A",
"gcc_version": "(GCC) 13.2.1 20230801",
"clang_version": "16.0.6",
"cmake_version": "version 3.26.4",
"os": "Arch Linux (x86_64)",
"libc_version": "glibc-2.38",
"python_version": "3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0] (64-bit runtime)",
"python_platform": "Linux-6.6.4-zen1-1-zen-x86_64-with-glibc2.38",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": null,
"nvidia_gpu_models": "AMD Radeon RX 7900 XTX (gfx1100)",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.23.5",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"pytorch-triton-rocm==2.1.0+dafe145982",
"torch==2.2.0.dev20231208+rocm5.6",
"torchdiffeq==0.2.3",
"torchmetrics==1.2.1",
"torchsde==0.2.6",
"torchvision==0.17.0.dev20231208+rocm5.6"
],
"conda_packages": [
"numpy 1.26.2 py311h24aa872_0 ",
"numpy-base 1.26.2 py311hbfb1bba_0 ",
"open-clip-torch 2.20.0 pypi_0 pypi",
"pytorch-lightning 1.9.4 pypi_0 pypi",
"pytorch-triton-rocm 2.1.0+dafe145982 pypi_0 pypi",
"torch 2.2.0.dev20231208+rocm5.7 pypi_0 pypi",
"torchaudio 2.2.0.dev20231208+rocm5.7 pypi_0 pypi",
"torchdiffeq 0.2.3 pypi_0 pypi",
"torchmetrics 1.2.1 pypi_0 pypi",
"torchsde 0.2.5 pypi_0 pypi",
"torchvision 0.17.0.dev20231208+rocm5.7 pypi_0 pypi"
],
"hip_compiled_version": "5.6.31061-8c743ae5d",
"hip_runtime_version": "5.6.31061",
"miopen_runtime_version": "2.20.0",
"caching_allocator_config": "",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture: x86_64",
"CPU op-mode(s): 32-bit, 64-bit",
"Address sizes: 48 bits physical, 48 bits virtual",
"Byte Order: Little Endian",
"CPU(s): 32",
"On-line CPU(s) list: 0-31",
"Vendor ID: AuthenticAMD",
"Model name: AMD Ryzen 9 5950X 16-Core Processor",
"CPU family: 25",
"Model: 33",
"Thread(s) per core: 2",
"Core(s) per socket: 16",
"Socket(s): 1",
"Stepping: 0",
"Frequency boost: disabled",
"CPU(s) scaling MHz: 49%",
"CPU max MHz: 6279.4922",
"CPU min MHz: 2200.0000",
"BogoMIPS: 8383.88",
"Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 erms invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local user_shstk clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca fsrm debug_swap",
"Virtualization: AMD-V",
"L1d cache: 512 KiB (16 instances)",
"L1i cache: 512 KiB (16 instances)",
"L2 cache: 8 MiB (16 instances)",
"L3 cache: 64 MiB (2 instances)",
"NUMA node(s): 1",
"NUMA node0 CPU(s): 0-31",
"Vulnerability Gather data sampling: Not affected",
"Vulnerability Itlb multihit: Not affected",
"Vulnerability L1tf: Not affected",
"Vulnerability Mds: Not affected",
"Vulnerability Meltdown: Not affected",
"Vulnerability Mmio stale data: Not affected",
"Vulnerability Retbleed: Not affected",
"Vulnerability Spec rstack overflow: Vulnerable: Safe RET, no microcode",
"Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl",
"Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization",
"Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected",
"Vulnerability Srbds: Not affected",
"Vulnerability Tsx async abort: Not affected"
]
},
"Exceptions": [],
"CPU": {
"model": "",
"count logical": 32,
"count physical": 16
},
"RAM": {
"total": "31GB",
"used": "6GB",
"free": "20GB",
"active": "7GB",
"inactive": "2GB",
"buffers": "172MB",
"cached": "5GB",
"shared": "199MB"
},
"Extensions": [
{
"name": "clip-interrogator-ext",
"path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/clip-interrogator-ext",
"version": "0f1a4591",
"branch": "main",
"remote": "https://github.com/pharmapsychotic/clip-interrogator-ext.git"
},
{
"name": "latent-upscale",
"path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/latent-upscale",
"version": "b9f75f44",
"branch": "main",
"remote": "https://github.com/feynlee/latent-upscale.git"
},
{
"name": "sd-webui-controlnet",
"path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet",
"version": "feea1f65",
"branch": "main",
"remote": "https://github.com/Mikubill/sd-webui-controlnet.git"
},
{
"name": "ultimate-upscale-for-automatic1111",
"path": "/home/ciel/stable-diffusion/stable-diffusion-webui/extensions/ultimate-upscale-for-automatic1111",
"version": "728ffcec",
"branch": "master",
"remote": "https://github.com/Coyote-A/ultimate-upscale-for-automatic1111.git"
}
],
"Inactive extensions": [],
"Environment": {
"GIT": "git",
"GRADIO_ANALYTICS_ENABLED": "False",
"TORCH_COMMAND": "pip install --pre torch torchvision --index-url https://download.pytorch.org/whl/nightly/rocm5.6"
},
"Config": {
"samples_save": true,
"samples_format": "png",
"samples_filename_pattern": "",
"save_images_add_number": true,
"save_images_replace_action": "Replace",
"grid_save": true,
"grid_format": "png",
"grid_extended_filename": false,
"grid_only_if_multiple": true,
"grid_prevent_empty_spots": false,
"grid_zip_filename_pattern": "",
"n_rows": -1,
"font": "",
"grid_text_active_color": "#000000",
"grid_text_inactive_color": "#999999",
"grid_background_color": "#ffffff",
"save_images_before_face_restoration": false,
"save_images_before_highres_fix": false,
"save_images_before_color_correction": false,
"save_mask": false,
"save_mask_composite": false,
"jpeg_quality": 80,
"webp_lossless": false,
"export_for_4chan": true,
"img_downscale_threshold": 4.0,
"target_side_length": 4000,
"img_max_size_mp": 200,
"use_original_name_batch": true,
"use_upscaler_name_as_suffix": false,
"save_selected_only": true,
"save_init_img": false,
"temp_dir": "",
"clean_temp_dir_at_start": false,
"save_incomplete_images": false,
"notification_audio": true,
"notification_volume": 100,
"outdir_samples": "",
"outdir_txt2img_samples": "outputs/txt2img-images",
"outdir_img2img_samples": "outputs/img2img-images",
"outdir_extras_samples": "outputs/extras-images",
"outdir_grids": "",
"outdir_txt2img_grids": "outputs/txt2img-grids",
"outdir_img2img_grids": "outputs/img2img-grids",
"outdir_save": "log/images",
"outdir_init_images": "outputs/init-images",
"save_to_dirs": true,
"grid_save_to_dirs": true,
"use_save_to_dirs_for_ui": false,
"directories_filename_pattern": "[date]",
"directories_max_prompt_words": 8,
"ESRGAN_tile": 192,
"ESRGAN_tile_overlap": 8,
"realesrgan_enabled_models": [
"R-ESRGAN 4x+",
"R-ESRGAN 4x+ Anime6B"
],
"upscaler_for_img2img": null,
"face_restoration": false,
"face_restoration_model": "CodeFormer",
"code_former_weight": 0.5,
"face_restoration_unload": false,
"auto_launch_browser": "Local",
"enable_console_prompts": false,
"show_warnings": false,
"show_gradio_deprecation_warnings": true,
"memmon_poll_rate": 8,
"samples_log_stdout": false,
"multiple_tqdm": true,
"print_hypernet_extra": false,
"list_hidden_files": true,
"disable_mmap_load_safetensors": false,
"hide_ldm_prints": true,
"dump_stacks_on_signal": false,
"api_enable_requests": true,
"api_forbid_local_requests": true,
"api_useragent": "",
"unload_models_when_training": false,
"pin_memory": false,
"save_optimizer_state": false,
"save_training_settings_to_txt": true,
"dataset_filename_word_regex": "",
"dataset_filename_join_string": " ",
"training_image_repeats_per_epoch": 1,
"training_write_csv_every": 500,
"training_xattention_optimizations": false,
"training_enable_tensorboard": false,
"training_tensorboard_save_images": false,
"training_tensorboard_flush_every": 120,
"sd_model_checkpoint": "AOM3A1B_orangemixs.safetensors [5493a0ec49]",
"sd_checkpoints_limit": 1,
"sd_checkpoints_keep_in_cpu": true,
"sd_checkpoint_cache": 0,
"sd_unet": "Automatic",
"enable_quantization": false,
"enable_emphasis": true,
"enable_batch_seeds": true,
"comma_padding_backtrack": 20,
"CLIP_stop_at_last_layers": 1,
"upcast_attn": true,
"randn_source": "GPU",
"tiling": false,
"hires_fix_refiner_pass": "second pass",
"sdxl_crop_top": 0,
"sdxl_crop_left": 0,
"sdxl_refiner_low_aesthetic_score": 2.5,
"sdxl_refiner_high_aesthetic_score": 6.0,
"sd_vae_checkpoint_cache": 1,
"sd_vae": "orangemix.vae.pt",
"sd_vae_overrides_per_model_preferences": true,
"auto_vae_precision": true,
"sd_vae_encode_method": "Full",
"sd_vae_decode_method": "Full",
"inpainting_mask_weight": 1.0,
"initial_noise_multiplier": 1.0,
"img2img_extra_noise": 0.0,
"img2img_color_correction": false,
"img2img_fix_steps": false,
"img2img_background_color": "#ffffff",
"img2img_editor_height": 720,
"img2img_sketch_default_brush_color": "#ffffff",
"img2img_inpaint_mask_brush_color": "#ffffff",
"img2img_inpaint_sketch_default_brush_color": "#ffffff",
"return_mask": false,
"return_mask_composite": false,
"img2img_batch_show_results_limit": 32,
"cross_attention_optimization": "Automatic",
"s_min_uncond": 0.0,
"token_merging_ratio": 0.0,
"token_merging_ratio_img2img": 0.0,
"token_merging_ratio_hr": 0.0,
"pad_cond_uncond": false,
"persistent_cond_cache": true,
"batch_cond_uncond": true,
"use_old_emphasis_implementation": false,
"use_old_karras_scheduler_sigmas": false,
"no_dpmpp_sde_batch_determinism": false,
"use_old_hires_fix_width_height": false,
"dont_fix_second_order_samplers_schedule": false,
"hires_fix_use_firstpass_conds": false,
"use_old_scheduling": false,
"interrogate_keep_models_in_memory": false,
"interrogate_return_ranks": false,
"interrogate_clip_num_beams": 1,
"interrogate_clip_min_length": 24,
"interrogate_clip_max_length": 48,
"interrogate_clip_dict_limit": 1500,
"interrogate_clip_skip_categories": [],
"interrogate_deepbooru_score_threshold": 0.5,
"deepbooru_sort_alpha": true,
"deepbooru_use_spaces": true,
"deepbooru_escape": true,
"deepbooru_filter_tags": "",
"extra_networks_show_hidden_directories": true,
"extra_networks_dir_button_function": false,
"extra_networks_hidden_models": "When searched",
"extra_networks_default_multiplier": 1.0,
"extra_networks_card_width": 0,
"extra_networks_card_height": 0,
"extra_networks_card_text_scale": 1.0,
"extra_networks_card_show_desc": true,
"extra_networks_card_order_field": "Path",
"extra_networks_card_order": "Ascending",
"extra_networks_add_text_separator": " ",
"ui_extra_networks_tab_reorder": "",
"textual_inversion_print_at_load": false,
"textual_inversion_add_hashes_to_infotext": true,
"sd_hypernetwork": "None",
"keyedit_precision_attention": 0.1,
"keyedit_precision_extra": 0.05,
"keyedit_delimiters": ".,\\/!?%^*;:{}=`~() ",
"keyedit_delimiters_whitespace": [
"Tab",
"Carriage Return",
"Line Feed"
],
"disable_token_counters": false,
"return_grid": true,
"do_not_show_images": false,
"js_modal_lightbox": true,
"js_modal_lightbox_initially_zoomed": true,
"js_modal_lightbox_gamepad": false,
"js_modal_lightbox_gamepad_repeat": 250,
"gallery_height": "",
"compact_prompt_box": false,
"samplers_in_dropdown": true,
"dimensions_and_batch_together": true,
"sd_checkpoint_dropdown_use_short": false,
"hires_fix_show_sampler": false,
"hires_fix_show_prompts": false,
"txt2img_settings_accordion": false,
"img2img_settings_accordion": false,
"localization": "None",
"quicksettings_list": [
"sd_model_checkpoint"
],
"ui_tab_order": [],
"hidden_tabs": [],
"ui_reorder_list": [],
"gradio_theme": "Default",
"gradio_themes_cache": true,
"show_progress_in_title": true,
"send_seed": true,
"send_size": true,
"enable_pnginfo": true,
"save_txt": false,
"add_model_name_to_info": true,
"add_model_hash_to_info": true,
"add_vae_name_to_info": true,
"add_vae_hash_to_info": true,
"add_user_name_to_info": false,
"add_version_to_infotext": true,
"disable_weights_auto_swap": true,
"infotext_skip_pasting": [],
"infotext_styles": "Apply if any",
"show_progressbar": true,
"live_previews_enable": false,
"live_previews_image_format": "png",
"show_progress_grid": true,
"show_progress_every_n_steps": 5,
"show_progress_type": "Approx NN",
"live_preview_allow_lowvram_full": false,
"live_preview_content": "Prompt",
"live_preview_refresh_period": 300.0,
"live_preview_fast_interrupt": false,
"hide_samplers": [],
"eta_ddim": 0.0,
"eta_ancestral": 1.0,
"ddim_discretize": "uniform",
"s_churn": 0.0,
"s_tmin": 0.0,
"s_tmax": 0.0,
"s_noise": 1.0,
"k_sched_type": "Automatic",
"sigma_min": 0.0,
"sigma_max": 0.0,
"rho": 0.0,
"eta_noise_seed_delta": 0,
"always_discard_next_to_last_sigma": false,
"sgm_noise_multiplier": false,
"uni_pc_variant": "bh1",
"uni_pc_skip_type": "time_uniform",
"uni_pc_order": 3,
"uni_pc_lower_order_final": true,
"postprocessing_enable_in_main_ui": [],
"postprocessing_operation_order": [],
"upscaling_max_images_in_cache": 5,
"postprocessing_existing_caption_action": "Ignore",
"disabled_extensions": [],
"disable_all_extensions": "none",
"restore_config_state_file": "",
"sd_checkpoint_hash": "5493a0ec491f5961dbdc1c861404088a6ae9bd4007f6a3a7c5dee8789cdc1361",
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"control_net_detectedmap_dir": "detected_maps",
"control_net_models_path": "",
"control_net_modules_path": "",
"control_net_unit_count": 3,
"control_net_model_cache_size": 1,
"control_net_inpaint_blur_sigma": 7,
"control_net_no_high_res_fix": false,
"control_net_no_detectmap": false,
"control_net_detectmap_autosaving": false,
"control_net_allow_script_control": false,
"control_net_sync_field_args": true,
"controlnet_show_batch_images_in_ui": false,
"controlnet_increment_seed_during_batch": false,
"controlnet_disable_openpose_edit": false,
"controlnet_ignore_noninpaint_mask": false,
"lora_functional": false,
"sd_lora": "None",
"lora_preferred_name": "Alias from file",
"lora_add_hashes_to_infotext": true,
"lora_show_all": false,
"lora_hide_unknown_for_versions": [],
"lora_in_memory_limit": 0,
"extra_options_txt2img": [],
"extra_options_img2img": [],
"extra_options_cols": 1,
"extra_options_accordion": false,
"canvas_hotkey_zoom": "Alt",
"canvas_hotkey_adjust": "Ctrl",
"canvas_hotkey_move": "F",
"canvas_hotkey_fullscreen": "S",
"canvas_hotkey_reset": "R",
"canvas_hotkey_overlap": "O",
"canvas_show_tooltip": true,
"canvas_auto_expand": true,
"canvas_blur_prompt": false,
"canvas_disabled_functions": [
"Overlap"
]
},
"Startup": {
"total": 11.257086753845215,
"records": {
"initial startup": 0.02352619171142578,
"prepare environment/checks": 3.457069396972656e-05,
"prepare environment/git version info": 0.009780406951904297,
"prepare environment/torch GPU test": 2.7273693084716797,
"prepare environment/clone repositores": 0.038356781005859375,
"prepare environment/run extensions installers/sd-webui-controlnet": 0.14071893692016602,
"prepare environment/run extensions installers/ultimate-upscale-for-automatic1111": 2.288818359375e-05,
"prepare environment/run extensions installers/clip-interrogator-ext": 2.8869497776031494,
"prepare environment/run extensions installers/latent-upscale": 5.626678466796875e-05,
"prepare environment/run extensions installers": 3.0277533531188965,
"prepare environment": 5.820652484893799,
"launcher": 0.0008344650268554688,
"import torch": 2.0337331295013428,
"import gradio": 0.6256029605865479,
"setup paths": 0.9430902004241943,
"import ldm": 0.0025310516357421875,
"import sgm": 2.384185791015625e-06,
"initialize shared": 0.047745466232299805,
"other imports": 0.5719733238220215,
"opts onchange": 0.0002732276916503906,
"setup SD model": 0.0003185272216796875,
"setup codeformer": 0.07199668884277344,
"setup gfpgan": 0.009232521057128906,
"set samplers": 2.8371810913085938e-05,
"list extensions": 0.0010488033294677734,
"restore config state file": 5.4836273193359375e-06,
"list SD models": 0.004712820053100586,
"list localizations": 0.0001246929168701172,
"load scripts/custom_code.py": 0.001154184341430664,
"load scripts/img2imgalt.py": 0.0002789497375488281,
"load scripts/loopback.py": 0.0001888275146484375,
"load scripts/outpainting_mk_2.py": 0.0002484321594238281,
"load scripts/poor_mans_outpainting.py": 0.0001766681671142578,
"load scripts/postprocessing_caption.py": 0.0001506805419921875,
"load scripts/postprocessing_codeformer.py": 0.00015020370483398438,
"load scripts/postprocessing_create_flipped_copies.py": 0.00014519691467285156,
"load scripts/postprocessing_focal_crop.py": 0.00043463706970214844,
"load scripts/postprocessing_gfpgan.py": 0.00014495849609375,
"load scripts/postprocessing_split_oversized.py": 0.00015592575073242188,
"load scripts/postprocessing_upscale.py": 0.00021982192993164062,
"load scripts/processing_autosized_crop.py": 0.0001621246337890625,
"load scripts/prompt_matrix.py": 0.0001780986785888672,
"load scripts/prompts_from_file.py": 0.0001876354217529297,
"load scripts/sd_upscale.py": 0.00016450881958007812,
"load scripts/xyz_grid.py": 0.0010995864868164062,
"load scripts/ldsr_model.py": 0.11085081100463867,
"load scripts/lora_script.py": 0.05980086326599121,
"load scripts/scunet_model.py": 0.011086463928222656,
"load scripts/swinir_model.py": 0.010489225387573242,
"load scripts/hotkey_config.py": 0.0001678466796875,
"load scripts/extra_options_section.py": 0.00020551681518554688,
"load scripts/hypertile_script.py": 0.019654512405395508,
"load scripts/hypertile_xyz.py": 8.058547973632812e-05,
"load scripts/clip_interrogator_ext.py": 0.02592325210571289,
"load scripts/latent_upscale.py": 0.0007441043853759766,
"load scripts/adapter.py": 0.0003275871276855469,
"load scripts/api.py": 0.12074923515319824,
"load scripts/batch_hijack.py": 0.0005114078521728516,
"load scripts/cldm.py": 0.00022983551025390625,
"load scripts/controlmodel_ipadapter.py": 0.00032711029052734375,
"load scripts/controlnet.py": 0.0494229793548584,
"load scripts/controlnet_diffusers.py": 0.0001556873321533203,
"load scripts/controlnet_lllite.py": 0.0001430511474609375,
"load scripts/controlnet_lora.py": 0.00012731552124023438,
"load scripts/controlnet_model_guess.py": 0.00011944770812988281,
"load scripts/controlnet_version.py": 0.0001239776611328125,
"load scripts/enums.py": 0.0003447532653808594,
"load scripts/external_code.py": 6.246566772460938e-05,
"load scripts/global_state.py": 0.0003178119659423828,
"load scripts/hook.py": 0.0002903938293457031,
"load scripts/infotext.py": 9.560585021972656e-05,
"load scripts/logging.py": 0.00016260147094726562,
"load scripts/lvminthin.py": 0.0001952648162841797,
"load scripts/movie2movie.py": 0.00022029876708984375,
"load scripts/processor.py": 0.00023818016052246094,
"load scripts/utils.py": 0.00011324882507324219,
"load scripts/xyz_grid_support.py": 0.0003902912139892578,
"load scripts/ultimate-upscale.py": 0.00045228004455566406,
"load scripts/refiner.py": 0.00011444091796875,
"load scripts/seed.py": 0.00012302398681640625,
"load scripts": 0.41962695121765137,
"load upscalers": 0.001577138900756836,
"refresh VAE": 0.0006160736083984375,
"refresh textual inversion templates": 2.86102294921875e-05,
"scripts list_optimizers": 0.00027680397033691406,
"scripts list_unets": 4.76837158203125e-06,
"reload hypernetworks": 0.0027685165405273438,
"initialize extra networks": 0.004837512969970703,
"scripts before_ui_callback": 0.00041604042053222656,
"create ui": 0.4426920413970947,
"gradio launch": 0.23865938186645508,
"add APIs": 0.003912210464477539,
"app_started_callback/lora_script.py": 0.0001537799835205078,
"app_started_callback/clip_interrogator_ext.py": 0.0003566741943359375,
"app_started_callback/api.py": 0.0010819435119628906,
"app_started_callback": 0.001596689224243164
}
},
"Packages": [
"absl-py==2.0.0",
"accelerate==0.21.0",
"addict==2.4.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohttp==3.9.1",
"aiosignal==1.3.1",
"altair==5.2.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"attrs==23.1.0",
"basicsr==1.4.2",
"beautifulsoup4==4.12.2",
"blendmodes==2022",
"boltons==23.1.1",
"cachetools==5.3.2",
"certifi==2022.12.7",
"cffi==1.16.0",
"charset-normalizer==2.1.1",
"clean-fid==0.1.35",
"click==8.1.7",
"clip-interrogator==0.6.0",
"clip==1.0",
"contourpy==1.2.0",
"cssselect2==0.7.0",
"cycler==0.12.1",
"deprecation==2.1.0",
"einops==0.4.1",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.3.1",
"filelock==3.9.0",
"filterpy==1.4.5",
"flatbuffers==23.5.26",
"fonttools==4.46.0",
"frozenlist==1.4.0",
"fsspec==2023.12.1",
"ftfy==6.1.3",
"future==0.18.3",
"fvcore==0.1.5.post20221221",
"gdown==4.7.1",
"gfpgan==1.3.8",
"gitdb==4.0.11",
"gitpython==3.1.32",
"google-auth-oauthlib==1.1.0",
"google-auth==2.25.1",
"gradio-client==0.5.0",
"gradio==3.41.2",
"grpcio==1.60.0",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.19.4",
"idna==3.4",
"imageio==2.33.0",
"importlib-metadata==7.0.0",
"importlib-resources==6.1.1",
"inflection==0.5.1",
"iopath==0.1.9",
"jinja2==3.1.2",
"jsonmerge==1.8.0",
"jsonschema-specifications==2023.11.2",
"jsonschema==4.20.0",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy-loader==0.3",
"lightning-utilities==0.10.0",
"llvmlite==0.41.1",
"lmdb==1.4.1",
"lpips==0.1.4",
"lxml==4.9.3",
"markdown==3.5.1",
"markupsafe==2.1.3",
"matplotlib==3.8.2",
"mediapipe==0.10.8",
"mpmath==1.2.1",
"multidict==6.0.4",
"networkx==3.0rc1",
"numba==0.58.1",
"numpy==1.23.5",
"oauthlib==3.2.2",
"omegaconf==2.2.3",
"open-clip-torch==2.20.0",
"opencv-contrib-python==4.8.1.78",
"opencv-python==4.8.1.78",
"orjson==3.9.10",
"packaging==23.2",
"pandas==2.1.4",
"piexif==1.1.3",
"pillow==9.5.0",
"pip==23.1.2",
"platformdirs==4.1.0",
"portalocker==2.8.2",
"protobuf==3.20.0",
"psutil==5.9.5",
"pyasn1-modules==0.3.0",
"pyasn1==0.5.1",
"pycparser==2.21",
"pydantic==1.10.13",
"pydub==0.25.1",
"pyparsing==3.1.1",
"pysocks==1.7.1",
"python-dateutil==2.8.2",
"python-multipart==0.0.6",
"pytorch-lightning==1.9.4",
"pytorch-triton-rocm==2.1.0+dafe145982",
"pytz==2023.3.post1",
"pywavelets==1.5.0",
"pyyaml==6.0.1",
"realesrgan==0.3.0",
"referencing==0.32.0",
"regex==2023.10.3",
"reportlab==4.0.7",
"requests-oauthlib==1.3.1",
"requests==2.28.1",
"resize-right==0.0.2",
"rpds-py==0.13.2",
"rsa==4.9",
"safetensors==0.3.1",
"scikit-image==0.21.0",
"scipy==1.11.4",
"semantic-version==2.10.0",
"sentencepiece==0.1.99",
"setuptools==65.5.0",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.0",
"sounddevice==0.4.6",
"soupsieve==2.5",
"starlette==0.26.1",
"svglib==1.5.1",
"sympy==1.11.1",
"tabulate==0.9.0",
"tb-nightly==2.16.0a20231208",
"tensorboard-data-server==0.7.2",
"termcolor==2.4.0",
"tf-keras-nightly==2.16.0.dev2023120810",
"tifffile==2023.9.26",
"timm==0.9.2",
"tinycss2==1.2.1",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"tomli==2.0.1",
"toolz==0.12.0",
"torch==2.2.0.dev20231208+rocm5.6",
"torchdiffeq==0.2.3",
"torchmetrics==1.2.1",
"torchsde==0.2.6",
"torchvision==0.17.0.dev20231208+rocm5.6",
"tqdm==4.66.1",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing-extensions==4.8.0",
"tzdata==2023.3",
"urllib3==1.26.13",
"uvicorn==0.24.0.post1",
"wcwidth==0.2.12",
"webencodings==0.5.1",
"websockets==11.0.3",
"werkzeug==3.0.1",
"yacs==0.1.8",
"yapf==0.40.2",
"yarl==1.9.4",
"zipp==3.17.0"
]
}
### What browsers do you use to access the UI ?
Mozilla Firefox
### Console logs
```Shell
❯ ./webui.sh (base)
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on ciel user
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Using TCMalloc: libtcmalloc_minimal.so.4
Python 3.11.4 (main, Jul 5 2023, 13:45:01) [GCC 11.2.0]
Version: v1.7.0-RC-5-gf92d6149
Commit hash: f92d61497a426a19818625c3ccdaae9beeb82b31
Launching Web UI with arguments:
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
2023-12-09 17:08:09,876 - ControlNet - INFO - ControlNet v1.1.422
ControlNet preprocessor location: /home/ciel/stable-diffusion/stable-diffusion-webui/extensions/sd-webui-controlnet/annotator/downloads
2023-12-09 17:08:09,921 - ControlNet - INFO - ControlNet v1.1.422
Loading weights [5493a0ec49] from /home/ciel/stable-diffusion/stable-diffusion-webui/models/Stable-diffusion/AOM3A1B_orangemixs.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Creating model from config: /home/ciel/stable-diffusion/stable-diffusion-webui/configs/v1-inference.yaml
Startup time: 8.9s (prepare environment: 4.0s, import torch: 2.0s, import gradio: 0.5s, setup paths: 0.8s, other imports: 0.5s, load scripts: 0.4s, create ui: 0.4s, gradio launch: 0.2s).
Loading VAE weights specified in settings: /home/ciel/stable-diffusion/stable-diffusion-webui/models/VAE/orangemix.vae.pt
Applying attention optimization: Doggettx... done.
Model loaded in 2.6s (load weights from disk: 0.6s, create model: 0.2s, apply weights to model: 1.4s, load VAE: 0.2s, calculate empty prompt: 0.1s).
Traceback (most recent call last):
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/routes.py", line 488, in run_predict
output = await app.get_blocks().process_api(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1431, in process_api
result = await self.call_function(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/blocks.py", line 1103, in call_function
prediction = await anyio.to_thread.run_sync(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/to_thread.py", line 33, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/anyio/_backends/_asyncio.py", line 807, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/venv/lib/python3.11/site-packages/gradio/utils.py", line 707, in wrapper
response = f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/ui_prompt_styles.py", line 27, in save_style
shared.prompt_styles.save_styles(shared.styles_filename)
File "/home/ciel/stable-diffusion/stable-diffusion-webui/modules/styles.py", line 212, in save_styles
style_paths.remove("do_not_save")
KeyError: 'do_not_save'
```
### Additional information
I'm running dev branch due to the Navi3 bug, checking out master after launch seems to result in the same issue, but it could have just been jit-ed, didn't test very in-depth | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14276 | null | {'base_commit': 'f92d61497a426a19818625c3ccdaae9beeb82b31', 'files': [{'path': 'modules/styles.py', 'status': 'modified', 'Loc': {"('StyleDatabase', '__init__', 95)": {'mod': [101, 102, 103, 104]}, "('StyleDatabase', None, 94)": {'mod': [158, 159, 160, 161]}, "('StyleDatabase', 'get_style_paths', 158)": {'mod': [175, 177]}, "('StyleDatabase', 'save_styles', 195)": {'mod': [199, 200, 201, 202, 204, 205, 206, 207, 208, 209, 211, 212]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"modules/styles.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
home-assistant | core | c3e9c1a7e8fdc949b8e638d79ab476507ff92f18 | https://github.com/home-assistant/core/issues/60067 | integration: environment_canada
by-code-owner | Environment Canada (EC) radar integration slowing Environment Canada servers | ### The problem
The `config_flow` change to the EC integration did not change the way the underlying radar retrieval works, but did enable radar for everyone. As a result the EC servers are getting far too many requests. We (the codeowners) have been working with EC to diagnose this issue and understand their concerns.
We are doing two things (a PR is in progress). First, we are caching requests to the EC servers. Work so far shows that through caching we can reduce the number of requests by over 90%. This fix is in the integration dependency library.
Second, we are creating the radar (camera) entity with `_attr_entity_registry_enabled_default = False` so that new radar entities are disabled by default. Many people use the integration for forecast only.
Last, EC is putting a policy in place such that User Agent needs to be filled in to represent the calling library.
### What version of Home Assistant Core has the issue?
2021.12.0.dev0
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
Environment Canada
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/environment_canada/
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
Quote from one of the email exchanges with EC:
> What we observed is 1350 unique IP addresses using this code which made 23.5 million requests over 5 days.
In order to respond to EC as quickly as possible we are asking for consideration to release the PR, when available, in the next dot release. | null | https://github.com/home-assistant/core/pull/60087 | null | {'base_commit': 'c3e9c1a7e8fdc949b8e638d79ab476507ff92f18', 'files': [{'path': 'homeassistant/components/environment_canada/camera.py', 'status': 'modified', 'Loc': {"('ECCamera', '__init__', 49)": {'add': [57]}}}, {'path': 'homeassistant/components/environment_canada/manifest.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [603]}}}, {'path': 'requirements_test_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [372]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"homeassistant/components/environment_canada/camera.py",
"homeassistant/components/environment_canada/manifest.json"
],
"doc": [],
"test": [],
"config": [
"requirements_all.txt",
"requirements_test_all.txt"
],
"asset": []
} | 1 |
abi | screenshot-to-code | 939539611f0cad12056f7be78ef6b2128b90b779 | https://github.com/abi/screenshot-to-code/issues/336 | bug
p2 | Handle Nones in chunk.choices[0].delta | 
There is a successful request for the openai interface, but it seems that no code is generated.
backend-1 | ERROR: Exception in ASGI application
backend-1 | Traceback (most recent call last):
backend-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/websockets/websockets_impl.py", line 250, in run_asgi
backend-1 | result = await self.app(self.scope, self.asgi_receive, self.asgi_send)
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 84, in __call__
backend-1 | return await self.app(scope, receive, send)
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 276, in __call__
backend-1 | await super().__call__(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 122, in __call__
backend-1 | await self.middleware_stack(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 149, in __call__
backend-1 | await self.app(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/cors.py", line 75, in __call__
backend-1 | await self.app(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 79, in __call__
backend-1 | raise exc
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 68, in __call__
backend-1 | await self.app(scope, receive, sender)
backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 21, in __call__
backend-1 | raise e
backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/middleware/asyncexitstack.py", line 18, in __call__
backend-1 | await self.app(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 718, in __call__
backend-1 | await route.handle(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 341, in handle
backend-1 | await self.app(scope, receive, send)
backend-1 | File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 82, in app
backend-1 | await func(session)
backend-1 | File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 289, in app
backend-1 | await dependant.call(**values)
backend-1 | File "/app/routes/generate_code.py", line 251, in stream_code
backend-1 | completion = await stream_openai_response(
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | File "/app/llm.py", line 62, in stream_openai_response
backend-1 | content = chunk.choices[0].delta.content or ""
backend-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
backend-1 | AttributeError: 'NoneType' object has no attribute 'content'
backend-1 | INFO: connection closed
| null | https://github.com/abi/screenshot-to-code/pull/341 | null | {'base_commit': '939539611f0cad12056f7be78ef6b2128b90b779', 'files': [{'path': 'backend/llm.py', 'status': 'modified', 'Loc': {"(None, 'stream_openai_response', 32)": {'mod': [62, 63, 64]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49]}}}, {'path': 'frontend/src/App.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [381]}}}, {'path': 'frontend/yarn.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5644, 5939]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"backend/llm.py",
"frontend/src/App.tsx",
"frontend/package.json"
],
"doc": [],
"test": [],
"config": [
"frontend/yarn.lock"
],
"asset": []
} | 1 |
Significant-Gravitas | AutoGPT | bf895eb656dee9084273cd36395828bd06aa231d | https://github.com/Significant-Gravitas/AutoGPT/issues/6 | enhancement
good first issue
API costs | Make Auto-GPT aware of its running cost | Auto-GPT is expensive to run due to GPT-4's API cost.
We could experiment with making it aware of this fact, by tracking tokens as they are used and converting to a dollar cost.
This could also be displayed to the user to help them be more aware of exactly how much they are spending. | null | https://github.com/Significant-Gravitas/AutoGPT/pull/762 | null | {'base_commit': 'bf895eb656dee9084273cd36395828bd06aa231d', 'files': [{'path': 'autogpt/chat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'chat_with_ai', 54)": {'add': [135]}}}, {'path': 'autogpt/config/ai_config.py', 'status': 'modified', 'Loc': {"('AIConfig', None, 21)": {'add': [28]}, "('AIConfig', '__init__', 31)": {'add': [40, 48], 'mod': [32]}, "('AIConfig', 'load', 53)": {'add': [75], 'mod': [55, 77]}, "('AIConfig', 'save', 79)": {'add': [94]}, "('AIConfig', 'construct_full_prompt', 99)": {'add': [149], 'mod': [110]}}}, {'path': 'autogpt/llm_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, "(None, 'create_chat_completion', 56)": {'mod': [99, 107]}, "(None, 'create_embedding_with_ada', 156)": {'mod': [162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172]}}}, {'path': 'autogpt/memory/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'get_ada_embedding', 11)": {'mod': [13, 14, 15, 16, 17, 18, 19, 20, 21]}}}, {'path': 'autogpt/prompts/prompt.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'construct_main_ai_config', 78)": {'add': [88, 100, 109]}}}, {'path': 'autogpt/setup.py', 'status': 'modified', 'Loc': {"(None, 'generate_aiconfig_automatic', 139)": {'add': [194], 'mod': [196]}, "(None, 'generate_aiconfig_manual', 70)": {'mod': [136]}}}, {'path': 'tests/unit/test_commands.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 10]}, "(None, 'test_make_agent', 11)": {'mod': [17, 20]}}}, {'path': 'tests/unit/test_setup.py', 'status': 'modified', 'Loc': {"('TestAutoGPT', 'test_generate_aiconfig_automatic_fallback', 39)": {'add': [46]}, "('TestAutoGPT', 'test_prompt_user_manual_mode', 57)": {'add': [64]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"autogpt/chat.py",
"autogpt/prompts/prompt.py",
"autogpt/config/ai_config.py",
"autogpt/memory/base.py",
"autogpt/setup.py",
"autogpt/llm_utils.py"
],
"doc": [],
"test": [
"tests/unit/test_commands.py",
"tests/unit/test_setup.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e01ce744a981d8f19ae77ec695005e7000f4703 | https://github.com/yt-dlp/yt-dlp/issues/5855 | bug | Generic extractor can crash if Brotli is not available | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Testing #5851 in a configuration where no Brotli decoder was available showed the crash in the log.
The problem is this extractor code:
https://github.com/yt-dlp/yt-dlp/blob/1fc089143c79b02b8373ae1d785d5e3a68635d4d/yt_dlp/extractor/generic.py#L2306-L2318
Normally there is a check for a supported Brotli encoder (using `SUPPORTED_ENCODINGS`). Specifying `*` in the `Accept-encoding` header bypasses that check.
However, I don't think that `*` does what is wanted according to the comments in the above code. The code wants to get the resource with no decoding (because decoding in yt-dl[p] starts by reading the entire response), but `*` still allows the server to send a compressed response. What is wanted is the `identity` encoding which is the default if no other encoding is specified. Or, to re-cast the decoding process so that the whole response stream is not read before decoding, but that means creating stream decoding methods for Brotli and zlib.
Also, there could be a check for a supported encoding in `YoutubeDLHandler.http_response()`, perhaps synthesizing 416 or 406 if the server has sent an encoding that isn't supported, instead of the crash seen here.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '-F', 'https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.11.11 [8b644025b] (source)
[debug] Lazy loading extractors is disabled
[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']
[debug] Git HEAD: c73355510
[debug] Python 3.9.15 (CPython i686 32bit) - Linux-4.4.0-210-generic-i686-with-glibc2.23 (OpenSSL 1.1.1s 1 Nov 2022, glibc 2.23)
[debug] exe versions: ffmpeg 4.3, ffprobe 4.3
[debug] Optional libraries: Cryptodome-3.11.0, certifi-2019.11.28, secretstorage-3.2.0, sqlite3-2.6.0
[debug] Proxy map: {}
[debug] Loaded 1735 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.11.11, Current version: 2022.11.11
yt-dlp is up to date (2022.11.11)
[generic] Extracting URL: https://www.extra.cz/cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867
[generic] cauky-lidi-70-dil-babis-predstavil-pohadky-prymulanek-nebo-andrejovy-nove-saty-ac867: Downloading webpage
ERROR: 'NoneType' object has no attribute 'decompress'
Traceback (most recent call last):
File "/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py", line 1495, in wrapper
return func(self, *args, **kwargs)
File "/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py", line 1571, in __extract_info
ie_result = ie.extract(url)
File "/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py", line 680, in extract
ie_result = self._real_extract(url)
File "/home/df/Documents/src/yt-dlp/yt_dlp/extractor/generic.py", line 2314, in _real_extract
full_response = self._request_webpage(url, video_id, headers={
File "/home/df/Documents/src/yt-dlp/yt_dlp/extractor/common.py", line 807, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
File "/home/df/Documents/src/yt-dlp/yt_dlp/YoutubeDL.py", line 3719, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python3.9/urllib/request.py", line 523, in open
response = meth(req, response)
File "/home/df/Documents/src/yt-dlp/yt_dlp/utils.py", line 1452, in http_response
io.BytesIO(self.brotli(resp.read())), old_resp.headers, old_resp.url, old_resp.code)
File "/home/df/Documents/src/yt-dlp/yt_dlp/utils.py", line 1389, in brotli
return brotli.decompress(data)
AttributeError: 'NoneType' object has no attribute 'decompress'
```
| null | null | https://github.com/yt-dlp/yt-dlp/commit/3e01ce744a981d8f19ae77ec695005e7000f4703 | {'base_commit': '3e01ce744a981d8f19ae77ec695005e7000f4703', 'files': [{'path': 'yt_dlp/extractor/generic.py', 'status': 'modified', 'Loc': {"('GenericIE', None, 42)": {'add': [2156]}, "('GenericIE', '_real_extract', 2276)": {'mod': [2315]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/generic.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
CorentinJ | Real-Time-Voice-Cloning | ded7b37234e229d9bde0a9a506f7c65605803731 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/543 | Lack of pre-compiled results in lost interest | So I know the first thing people are going to say is, this isn't an issue. However, it is. By not having a precompiled version to download, over half the people that find their way to this GitHub are going to lose interest. Honestly, I'm one of them. I attempted to compile it, but then I saw that I had to track down each module for this, which quickly drove me away from it. All I wanted to do was mess around and see what it can do. Even if the results aren't mind-blowing, the concept interests me. But due to not having a ready-to-use executable, I, like many others I'm sure, have decided it isn't even worth messing with. | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/546 | null | {'base_commit': 'ded7b37234e229d9bde0a9a506f7c65605803731', 'files': [{'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"toolbox/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 96b5814de70ad2435b6db5f49b607b136921f701 | https://github.com/scikit-learn/scikit-learn/issues/26948 | Documentation | The copy button on install copies an extensive command including env activation | ### Describe the issue linked to the documentation
https://scikit-learn.org/stable/install.html
The above link takes you to the scikit-learn install page.
When you click the copy button, it will copy
`python3 -m venv sklearn-venvpython -m venv sklearn-venvpython -m venv sklearn-venvsource sklearn-venv/bin/activatesource sklearn-venv/bin/activatesklearn-venv\Scripts\activatepip install -U scikit-learnpip install -U scikit-learnpip install -U scikit-learnpip3 install -U scikit-learnconda create -n sklearn-env -c conda-forge scikit-learnconda activate sklearn-env`
instead of `pip3 install -U scikit-learn`
If this is indeed an issue, I want to create a pull request for it; please tell me in which file this issue resides.
Thanks
### Suggest a potential alternative/fix
By resolving the above issue
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"doc/themes/scikit-learn-modern/static/css/theme.css"
],
"doc": [
"doc/install.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
keras-team | keras | 49b9682b3570211c7d8f619f8538c08fd5d8bdad | https://github.com/keras-team/keras/issues/10036 | [API DESIGN REVIEW] sample weight in ImageDataGenerator.flow | https://docs.google.com/document/d/14anankKROhliJCpInQH-pITatdjO9UzSN6Iz0MwcDHw/edit?usp=sharing
Makes it easy to use data augmentation when sample weights are available. | null | https://github.com/keras-team/keras/pull/10092 | null | {'base_commit': '49b9682b3570211c7d8f619f8538c08fd5d8bdad', 'files': [{'path': 'keras/preprocessing/image.py', 'status': 'modified', 'Loc': {"('ImageDataGenerator', 'flow', 715)": {'add': [734, 759], 'mod': [754]}, "('NumpyArrayIterator', None, 1188)": {'add': [1201]}, "('NumpyArrayIterator', '__init__', 1216)": {'add': [1241, 1278], 'mod': [1217, 1218]}, "('NumpyArrayIterator', '_get_batches_of_transformed_samples', 1289)": {'add': [1313]}, "('ImageDataGenerator', None, 443)": {'mod': [715]}}}, {'path': 'tests/keras/preprocessing/image_test.py', 'status': 'modified', 'Loc': {"('TestImage', 'test_image_data_generator', 32)": {'add': [64]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/keras/preprocessing/image_test.py",
"keras/preprocessing/image.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | efb53aafdcaae058962c6189ddecb3dc62b02c31 | https://github.com/scrapy/scrapy/issues/6514 | enhancement | Migrate from setup.py to pyproject.toml | We should migrate to the modern declarative setuptools metadata approach as discussed in https://setuptools.pypa.io/en/latest/userguide/quickstart.html and https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html, but only after the 2.12 release. | null | https://github.com/scrapy/scrapy/pull/6547 | null | {'base_commit': 'efb53aafdcaae058962c6189ddecb3dc62b02c31', 'files': [{'path': '.bandit.yml', 'status': 'removed', 'Loc': {}}, {'path': '.bumpversion.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.coveragerc', 'status': 'removed', 'Loc': {}}, {'path': '.isort.cfg', 'status': 'removed', 'Loc': {}}, {'path': '.pre-commit-config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'MANIFEST.in', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [13]}}}, {'path': 'pylintrc', 'status': 'removed', 'Loc': {}}, {'path': 'pytest.ini', 'status': 'removed', 'Loc': {}}, {'path': 'setup.cfg', 'status': 'removed', 'Loc': {}}, {'path': 'setup.py', 'status': 'removed', 'Loc': {}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {"('CrawlerProcessSubprocess', 'test_shutdown_forced', 890)": {'mod': [902]}}}, {'path': 'tests/test_spiderloader/__init__.py', 'status': 'modified', 'Loc': {"('SpiderLoaderTest', 'test_syntax_error_warning', 146)": {'mod': [147, 148, 149]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [82]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/test_spiderloader/__init__.py",
".isort.cfg",
".coveragerc",
"setup.cfg",
"setup.py",
".bumpversion.cfg"
],
"doc": [],
"test": [
"tests/test_crawler.py"
],
"config": [
"pytest.ini",
".pre-commit-config.yaml",
"tox.ini",
"pylintrc",
".bandit.yml",
"MANIFEST.in"
],
"asset": []
} | 1 |
fastapi | fastapi | c6e950dc9cacefd692dbd8987a3acd12a44b506f | https://github.com/fastapi/fastapi/issues/5859 | question
question-migrate | FastAPI==0.89.0 Cannot use `None` as a return type when `status_code` is set to 204 with `from __future__ import annotations` | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the FastAPI documentation, with the integrated search.
- [X] I already searched in Google "How to X in FastAPI" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
- [X] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from __future__ import annotations
from fastapi import FastAPI
app = FastAPI()
@app.get("/", status_code=204)
def read_root() -> None:
return {"Hello": "World"}
```
### Description
If we add:
`from __future__ import annotations`
It changes the annotations structure and the response model is `NoneType` instead of `None`, which causes validation of the `status_code` vs `response_model` and raises an exception.
```python
...
File ".../site-packages/fastapi/routing.py", line 635, in decorator
self.add_api_route(
File ".../site-packages/fastapi/routing.py", line 574, in add_api_route
route = route_class(
File ".../site-packages/fastapi/routing.py", line 398, in __init__
assert is_body_allowed_for_status_code(
AssertionError: Status code 204 must not have a response body
```
I am working on a fix for it right now.
### Operating System
macOS
### Operating System Details
_No response_
### FastAPI Version
0.89.0
### Python Version
3.10
### Additional Context
_No response_ | null | https://github.com/fastapi/fastapi/pull/2246 | null | {'base_commit': 'c6e950dc9cacefd692dbd8987a3acd12a44b506f', 'files': [{'path': '.github/workflows/preview-docs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
".github/workflows/preview-docs.yml"
],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 3938f81c1b4a5ee81d5bfc6563c17a225f7e5068 | https://github.com/3b1b/manim/issues/1330 | Error after installing manim | I installed manim & all dependencies, but when I ran `python -m manim example_scenes.py OpeningManimExample`, I got the following error:
`Traceback (most recent call last):
File "c:\users\jm\anaconda3\lib\runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\jm\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\jm\Documents\work\manim_new\manim\manim.py", line 5, in <module>
manimlib.main()
File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\__init__.py", line 9, in main
scenes = manimlib.extract_scene.main(config)
File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\extract_scene.py", line 113, in main
scenes = get_scenes_to_render(all_scene_classes, scene_config, config)
File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\extract_scene.py", line 74, in get_scenes_to_render
scene = scene_class(**scene_config)
File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\scene\scene.py", line 44, in __init__
self.window = Window(self, **self.window_config)
File "C:\Users\jm\Documents\work\manim_new\manim\manimlib\window.py", line 19, in __init__
super().__init__(**kwargs)
File "C:\Users\jm\Envs\manim.new\lib\site-packages\moderngl_window\context\pyglet\window.py", line 51, in __init__
self._window = PygletWrapper(
File "C:\Users\jm\Envs\manim.new\lib\site-packages\pyglet\window\win32\__init__.py", line 134, in __init__
super(Win32Window, self).__init__(*args, **kwargs)
File "C:\Users\jm\Envs\manim.new\lib\site-packages\pyglet\window\__init__.py", line 603, in __init__
config = screen.get_best_config(config)
File "C:\Users\jm\Envs\manim.new\lib\site-packages\pyglet\canvas\base.py", line 194, in get_best_config
raise window.NoSuchConfigException()
pyglet.window.NoSuchConfigException`.
Any advice? And thank you | null | https://github.com/3b1b/manim/pull/1343 | null | {'base_commit': '3938f81c1b4a5ee81d5bfc6563c17a225f7e5068', 'files': [{'path': 'manimlib/window.py', 'status': 'modified', 'Loc': {"('Window', None, 10)": {'mod': [15]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/window.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
keras-team | keras | 84b283e6200bcb051ed976782fbb2b123bf9b8fc | https://github.com/keras-team/keras/issues/19793 | type:bug/performance | model.keras format much slower to load | Anyone experiencing unreasonably slow load times when loading a keras-format saved model? I have noticed this repeatedly when working in IPython, where simply instantiating a model via `Model.from_config` then calling `model.load_weights` is much (several factors) faster than loading a `model.keras` file.
My understanding is the keras format is simply a zip file with the config.json file and weights h5 (iirc) but weirdly enough, there's something not right going on while loading. | null | https://github.com/keras-team/keras/pull/19852 | null | {'base_commit': '84b283e6200bcb051ed976782fbb2b123bf9b8fc', 'files': [{'path': 'keras/src/saving/saving_lib.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 34]}, "(None, '_save_model_to_fileobj', 95)": {'mod': [112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 127, 128, 129, 130, 131, 132, 133, 134, 135]}, "(None, '_load_model_from_fileobj', 157)": {'mod': [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204]}, "(None, 'load_weights_only', 239)": {'mod': [253, 254, 255]}}}, {'path': 'keras/src/saving/saving_lib_test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [614]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"keras/src/saving/saving_lib_test.py",
"keras/src/saving/saving_lib.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
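The reporter's observation — rebuilding from config plus `load_weights` beating a single load of the zipped `.keras` file — can be mimicked with the standard library alone. The sketch below is purely illustrative (no Keras code is involved, and the member name is made up): it contrasts reading a large archive member through many small reads of the zip's decompressing stream with extracting it once into memory first, which is one plausible mechanism for this kind of slowdown. Timings will vary by machine.

```python
import io
import os
import tempfile
import time
import zipfile

# Build a zip holding one ~4 MB member, standing in for the weights
# file inside a .keras archive (the member name is illustrative).
payload = os.urandom(4 * 1024 * 1024)
archive = os.path.join(tempfile.mkdtemp(), "model.keras")
with zipfile.ZipFile(archive, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("model.weights.h5", payload)

def drain(fileobj, chunk=4096):
    """Read a file object in many small chunks, as an h5 reader might."""
    total = 0
    while True:
        data = fileobj.read(chunk)
        if not data:
            return total
        total += len(data)

# Path 1: stream the member through the archive, decompressing as we go.
t0 = time.perf_counter()
with zipfile.ZipFile(archive) as zf, zf.open("model.weights.h5") as f:
    n1 = drain(f)
streamed = time.perf_counter() - t0

# Path 2: extract the member once into memory, then read the plain buffer.
t0 = time.perf_counter()
with zipfile.ZipFile(archive) as zf:
    buf = io.BytesIO(zf.read("model.weights.h5"))
n2 = drain(buf)
extracted = time.perf_counter() - t0

print(f"streamed: {streamed:.4f}s, extracted first: {extracted:.4f}s")
```

If the gap between the two paths is large on a given setup, it supports the idea that *how* the archive member is read, rather than the zip format itself, is where the time goes.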
ansible | ansible | 4cdb266dac852859f695b0555cbe49e58343e69a | https://github.com/ansible/ansible/issues/3539 | bug | Bug in Conditional Include | Hi,
I know that when using conditionals on an include, 'All the tasks get evaluated, but the conditional is applied to each and every task'. However, this breaks when some of those tasks register variables and other tasks in the group use those variables.
Example:
main.yml:
```
- include: extra.yml
when: do_extra is defined
```
extra.yml:
```
- name: check if we can do task A
shell: check_if_task_A_possible
register: A_possible
ignore_errors: yes
- name: task A
shell: run_task_A
when: A_possible.rc == 0
```
Now if you run main.yml and 'do_extra' is not defined, the run will fail on 'task A' because when the 'when' condition is evaluated, the variable A_possible will not exist.
It is not sufficient to just add the top-level include conditional above the other, because right now the two conditions appear to be compounded and tested together, which will still fail because A_possible is not defined. I think you would have to evaluate the file-level conditional before the task-level ones to keep this from happening.
| null | https://github.com/ansible/ansible/pull/20158 | null | {'base_commit': '4cdb266dac852859f695b0555cbe49e58343e69a', 'files': [{'path': 'lib/ansible/modules/windows/win_robocopy.ps1', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25, 26, 27, 28, 73, 76, 93, 94, 95, 114, 115, 167, 168]}}}, {'path': 'lib/ansible/modules/windows/win_robocopy.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [132]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/windows/win_robocopy.ps1",
"lib/ansible/modules/windows/win_robocopy.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
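The failure mode described in the report can be shown with a few lines of plain Python (this is not Ansible or Jinja — just a sketch of the evaluation-order problem): when the `register` task is skipped, any later condition that dereferences the registered variable blows up unless the include-level condition is checked first, or the variable is guarded with an `is defined`-style test.

```python
# facts/registered variables: empty because the "check" task was skipped
facts = {}
do_extra = False  # stand-in for the include-level "do_extra is defined"

def evaluate(when):
    """Evaluate a task's `when` expression the way a runner might."""
    try:
        return "run" if when() else "skip"
    except KeyError as exc:
        return f"FAILED: undefined variable {exc}"

# If the compounded condition checks the task-level part first, the
# missing registered variable explodes before the include part can help:
print(evaluate(lambda: facts["A_possible"]["rc"] == 0 and do_extra))
# -> FAILED: undefined variable 'A_possible'

# Evaluating the include-level condition first (or guarding the variable)
# short-circuits safely:
print(evaluate(lambda: do_extra and facts["A_possible"]["rc"] == 0))
# -> skip
```

In playbook terms the equivalent guard would be something like `when: A_possible is defined and A_possible.rc == 0`, which keeps the task safe regardless of which condition the engine evaluates first.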
psf | requests | f5dacf84468ab7e0631cc61a3f1431a32e3e143c | https://github.com/psf/requests/issues/2654 | Feature Request
Contributor Friendly | utils.get_netrc_auth silently fails when netrc exists but fails to parse | My .netrc contains a line for the github auth, [like this](https://gist.github.com/wikimatze/9790374).
It turns out that `netrc.netrc()` doesn't like that:
```
>>> from netrc import netrc
>>> netrc()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py", line 35, in __init__
self._parse(file, fp, default_netrc)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/netrc.py", line 117, in _parse
file, lexer.lineno)
netrc.NetrcParseError: bad follower token 'protocol' (/Users/david/.netrc, line 9)
```
`get_netrc_auth` catches the `NetrcParseError` [but just ignores it](https://github.com/kennethreitz/requests/blob/master/requests/utils.py#L106).
At least having it emit a warning would have saved some hair-pulling.
| null | https://github.com/psf/requests/pull/2656 | null | {'base_commit': 'f5dacf84468ab7e0631cc61a3f1431a32e3e143c', 'files': [{'path': 'requests/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_netrc_auth', 70)": {'mod': [70, 108, 109]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
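The behavior the report asks for — keep swallowing the parse error so requests still works without netrc auth, but surface a warning first — can be sketched with the standard library's `netrc` module. The helper below is illustrative and is not requests' actual `get_netrc_auth`; the `.netrc` contents reproduce the `protocol` token from the report.

```python
import os
import tempfile
import warnings
from netrc import NetrcParseError, netrc

# A .netrc with a token the stdlib parser rejects, as in the report
path = os.path.join(tempfile.mkdtemp(), "netrc")
with open(path, "w") as f:
    f.write("machine github.com\n"
            "  login david\n"
            "  password hunter2\n"
            "  protocol https\n")  # -> "bad follower token 'protocol'"

def get_netrc_auth_noisy(host, netrc_path):
    """Illustrative sketch: ignore the parse error, but warn first."""
    try:
        info = netrc(netrc_path).authenticators(host)
        return (info[0], info[2]) if info else None
    except NetrcParseError as exc:
        warnings.warn(f"Could not parse {netrc_path}: {exc}")
        return None

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    auth = get_netrc_auth_noisy("github.com", path)

print(auth, len(caught))  # -> None 1  (failure is now visible, not silent)
```

The auth lookup still degrades gracefully to `None`, but the warning makes the broken `.netrc` discoverable instead of requiring the hair-pulling described above.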
oobabooga | text-generation-webui | 0877741b0350d200be7f1e6cca2780a25ee29cd0 | https://github.com/oobabooga/text-generation-webui/issues/5851 | bug | Inference failing using ExLlamav2 version 0.0.18 | ### Describe the bug
Since ExLlamav2 was upgraded to version 0.0.18 in the requirements.txt, inference using it is no longer working and fails with the error in the logs below. Reverting to version 0.0.17 resolves the issue.
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Install latest main branch (current commit is `26d822f64f2a029306b250b69dc58468662a4fc6`)
2. Download `GPTQ` model
3. Use `ExLlamav2_HF` model loader
4. Go to `Chat` tab and ask the AI a question.
5. Observe error, even though the model loaded successfully.
### Screenshot
_No response_
### Logs
```shell
21:35:11-061459 INFO Loading "TheBloke_dolphin-2.6-mistral-7B-GPTQ"
21:35:13-842112 INFO LOADER: "ExLlamav2"
21:35:13-843422 INFO TRUNCATION LENGTH: 32768
21:35:13-844234 INFO INSTRUCTION TEMPLATE: "Alpaca"
21:35:13-845014 INFO Loaded the model in 2.78 seconds.
Traceback (most recent call last):
File "/workspace/text-generation-webui/modules/text_generation.py", line 429, in generate_reply_custom
for reply in shared.model.generate_with_streaming(question, state):
File "/workspace/text-generation-webui/modules/exllamav2.py", line 140, in generate_with_streaming
self.generator.begin_stream(ids, settings, loras=self.loras)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 198, in begin_stream
self.begin_stream_ex(input_ids,
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 296, in begin_stream_ex
self._gen_begin_reuse(input_ids, gen_settings)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 624, in _gen_begin_reuse
self._gen_begin(in_tokens, gen_settings)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/generator/streaming.py", line 586, in _gen_begin
self.model.forward(self.sequence_ids[:, :-1],
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py", line 694, in forward
r, ls = self._forward(input_ids = input_ids[:, chunk_begin : chunk_end],
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/model.py", line 776, in _forward
x = module.forward(x, cache = cache, attn_params = attn_params, past_len = past_len, loras = loras, **kwargs)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/exllamav2/attn.py", line 596, in forward
attn_output = flash_attn_func(q_states, k_states, v_states, causal = True)
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 825, in flash_attn_func
return FlashAttnFunc.apply(
File "/usr/local/lib/python3.10/dist-packages/torch/autograd/function.py", line 553, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 507, in forward
out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = _flash_attn_forward(
File "/workspace/venvs/text-generation-webui/lib/python3.10/site-packages/flash_attn/flash_attn_interface.py", line 51, in _flash_attn_forward
out, q, k, v, out_padded, softmax_lse, S_dmask, rng_state = flash_attn_cuda.fwd(
RuntimeError: CUDA error: an illegal memory access was encountered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
```
### System Info
* Ubuntu 22.04 LTS
* Nvidia A5000 GPU on Runpod
* CUDA 12.1
| null | null | https://github.com/oobabooga/text-generation-webui/commit/0877741b0350d200be7f1e6cca2780a25ee29cd0 | {'base_commit': '0877741b0350d200be7f1e6cca2780a25ee29cd0', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}, {'path': 'requirements_amd.txt', 'status': 'modified', 'Loc': {'(None, None, 45)': {'mod': [45, 46, 47]}}}, {'path': 'requirements_amd_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43, 44, 45]}}}, {'path': 'requirements_apple_intel.txt', 'status': 'modified', 'Loc': {'(None, None, 41)': {'mod': [41]}}}, {'path': 'requirements_apple_silicon.txt', 'status': 'modified', 'Loc': {'(None, None, 43)': {'mod': [43]}}}, {'path': 'requirements_noavx2.txt', 'status': 'modified', 'Loc': {'(None, None, 59)': {'mod': [59, 60, 61, 62, 63]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements_apple_silicon.txt",
"requirements_amd_noavx2.txt",
"requirements_apple_intel.txt",
"requirements_amd.txt",
"requirements.txt",
"requirements_noavx2.txt"
],
"asset": []
} | null |
zylon-ai | private-gpt | 89477ea9d3a83181b0222b732a81c71db9edf142 | https://github.com/zylon-ai/private-gpt/issues/2013 | bug | [BUG] Another permissions error when installing with docker-compose | ### Pre-check
- [X] I have searched the existing issues and none cover this bug.
### Description
This looks similar, but not the same as #1876
As for following the instructions, I've not seen any relevant guide to installing with Docker, hence working a bit blind.
Background: I'm trying to run this on an Asustor NAS, which offers very little ability to customize the environment. Ideally, I'd just like to be able to run this by pasting a docker-compose file into Portainer, and having it work its magic from there:
---
```
sal@halob:/volume1/home/sal/apps/private-gpt $ docker-compose up
[+] Running 3/3
✔ Network private-gpt_default Created 0.1s
✔ Container private-gpt-ollama-1 Created 0.1s
✔ Container private-gpt-private-gpt-1 Created 0.1s
Attaching to ollama-1, private-gpt-1
ollama-1 | Couldn't find '/root/.ollama/id_ed25519'. Generating new private key.
ollama-1 | Your new public key is:
ollama-1 |
ollama-1 | ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBNQkShAIoUDyyueUTiCHM9/AZfZ+rxnUZgmh+YByBVB
ollama-1 |
ollama-1 | 2024/07/23 23:20:28 routes.go:1096: INFO server config env="map[CUDA_VISIBLE_DEVICES: GPU_DEVICE_ORDINAL: HIP_VISIBLE_DEVICES: HSA_OVERRIDE_GFX_VERSION: OLLAMA_DEBUG:false OLLAMA_FLASH_ATTENTION:false OLLAMA_HOST:http://0.0.0.0:11434 OLLAMA_INTEL_GPU:false OLLAMA_KEEP_ALIVE:5m0s OLLAMA_LLM_LIBRARY: OLLAMA_MAX_LOADED_MODELS:0 OLLAMA_MAX_QUEUE:512 OLLAMA_MAX_VRAM:0 OLLAMA_MODELS:/root/.ollama/models OLLAMA_NOHISTORY:false OLLAMA_NOPRUNE:false OLLAMA_NUM_PARALLEL:0 OLLAMA_ORIGINS:[http://localhost https://localhost http://localhost:* https://localhost:* http://127.0.0.1 https://127.0.0.1 http://127.0.0.1:* https://127.0.0.1:* http://0.0.0.0 https://0.0.0.0 http://0.0.0.0:* https://0.0.0.0:* app://* file://* tauri://*] OLLAMA_RUNNERS_DIR: OLLAMA_SCHED_SPREAD:false OLLAMA_TMPDIR: ROCR_VISIBLE_DEVICES:]"
ollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:778 msg="total blobs: 0"
ollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=images.go:785 msg="total unused blobs removed: 0"
ollama-1 | time=2024-07-23T23:20:28.317Z level=INFO source=routes.go:1143 msg="Listening on [::]:11434 (version 0.2.6)"
ollama-1 | time=2024-07-23T23:20:28.318Z level=INFO source=payload.go:30 msg="extracting embedded files" dir=/tmp/ollama1112441504/runners
private-gpt-1 | 23:20:29.406 [INFO ] private_gpt.settings.settings_loader - Starting application with profiles=['default', 'docker']
ollama-1 | time=2024-07-23T23:20:33.589Z level=INFO source=payload.go:44 msg="Dynamic LLM libraries [cpu_avx cpu_avx2 cuda_v11 rocm_v60102 cpu]"
ollama-1 | time=2024-07-23T23:20:33.589Z level=INFO source=gpu.go:205 msg="looking for compatible GPUs"
ollama-1 | time=2024-07-23T23:20:33.589Z level=WARN source=gpu.go:225 msg="CPU does not have minimum vector extensions, GPU inference disabled" required=avx detected="no vector extensions"
ollama-1 | time=2024-07-23T23:20:33.590Z level=INFO source=types.go:105 msg="inference compute" id=0 library=cpu compute="" driver=0.0 name="" total="31.1 GiB" available="28.1 GiB"
private-gpt-1 | There was a problem when trying to write in your cache folder (/nonexistent/.cache/huggingface/hub). You should set the environment variable TRANSFORMERS_CACHE to a writable directory.
private-gpt-1 | None of PyTorch, TensorFlow >= 2.0, or Flax have been found. Models won't be available and only tokenizers, configuration and file/data utilities can be used.
private-gpt-1 | 23:20:40.419 [INFO ] private_gpt.components.llm.llm_component - Initializing the LLM in mode=ollama
private-gpt-1 | Traceback (most recent call last):
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1 | return self._context[key]
private-gpt-1 | ~~~~~~~~~~~~~^^^^^
private-gpt-1 | KeyError: <class 'private_gpt.ui.ui.PrivateGptUi'>
private-gpt-1 |
private-gpt-1 | During handling of the above exception, another exception occurred:
private-gpt-1 |
private-gpt-1 | Traceback (most recent call last):
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1 | return self._context[key]
private-gpt-1 | ~~~~~~~~~~~~~^^^^^
private-gpt-1 | KeyError: <class 'private_gpt.server.ingest.ingest_service.IngestService'>
private-gpt-1 |
private-gpt-1 | During handling of the above exception, another exception occurred:
private-gpt-1 |
private-gpt-1 | Traceback (most recent call last):
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 798, in get
private-gpt-1 | return self._context[key]
private-gpt-1 | ~~~~~~~~~~~~~^^^^^
private-gpt-1 | KeyError: <class 'private_gpt.components.vector_store.vector_store_component.VectorStoreComponent'>
private-gpt-1 |
private-gpt-1 | During handling of the above exception, another exception occurred:
private-gpt-1 |
private-gpt-1 | Traceback (most recent call last):
private-gpt-1 | File "<frozen runpy>", line 198, in _run_module_as_main
private-gpt-1 | File "<frozen runpy>", line 88, in _run_code
private-gpt-1 | File "/home/worker/app/private_gpt/__main__.py", line 5, in <module>
private-gpt-1 | from private_gpt.main import app
private-gpt-1 | File "/home/worker/app/private_gpt/main.py", line 6, in <module>
private-gpt-1 | app = create_app(global_injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/private_gpt/launcher.py", line 63, in create_app
private-gpt-1 | ui = root_injector.get(PrivateGptUi)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1 | instance = self._get_instance(key, provider, self.injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1 | return provider.get(injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1 | return injector.create_object(self._cls)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
private-gpt-1 | dependencies = self.args_to_inject(
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
private-gpt-1 | instance: Any = self.get(interface)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1 | instance = self._get_instance(key, provider, self.injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1 | return provider.get(injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1 | return injector.create_object(self._cls)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1031, in call_with_injection
private-gpt-1 | dependencies = self.args_to_inject(
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1079, in args_to_inject
private-gpt-1 | instance: Any = self.get(interface)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 974, in get
private-gpt-1 | provider_instance = scope_instance.get(interface, binding.provider)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 91, in wrapper
private-gpt-1 | return function(*args, **kwargs)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 800, in get
private-gpt-1 | instance = self._get_instance(key, provider, self.injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 811, in _get_instance
private-gpt-1 | return provider.get(injector)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 264, in get
private-gpt-1 | return injector.create_object(self._cls)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 998, in create_object
private-gpt-1 | self.call_with_injection(init, self_=instance, kwargs=additional_kwargs)
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/injector/__init__.py", line 1040, in call_with_injection
private-gpt-1 | return callable(*full_args, **dependencies)
private-gpt-1 | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/private_gpt/components/vector_store/vector_store_component.py", line 114, in __init__
private-gpt-1 | client = QdrantClient(
private-gpt-1 | ^^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/qdrant_client.py", line 117, in __init__
private-gpt-1 | self._client = QdrantLocal(
private-gpt-1 | ^^^^^^^^^^^^
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py", line 66, in __init__
private-gpt-1 | self._load()
private-gpt-1 | File "/home/worker/app/.venv/lib/python3.11/site-packages/qdrant_client/local/qdrant_local.py", line 97, in _load
private-gpt-1 | os.makedirs(self.location, exist_ok=True)
private-gpt-1 | File "<frozen os>", line 215, in makedirs
private-gpt-1 | File "<frozen os>", line 225, in makedirs
private-gpt-1 | PermissionError: [Errno 13] Permission denied: 'local_data/private_gpt'
^CGracefully stopping... (press Ctrl+C again to force)
[+] Stopping 2/2
✔ Container private-gpt-private-gpt-1 Stopped 0.3s
✔ Container private-gpt-ollama-1 Stopped
```
### Steps to Reproduce
1. Clone the repo
2. docker-compose build
3. docker-compose up
### Expected Behavior
It should just run
### Actual Behavior
Error, as reported above
### Environment
Running on an Asustor router, docker 25.0.5
### Additional Information
_No response_
### Version
latest
### Setup Checklist
- [X] Confirm that you have followed the installation instructions in the project’s documentation.
- [X] Check that you are using the latest version of the project.
- [X] Verify disk space availability for model storage and data processing.
- [X] Ensure that you have the necessary permissions to run the project.
### NVIDIA GPU Setup Checklist
- [ ] Check that the all CUDA dependencies are installed and are compatible with your GPU (refer to [CUDA's documentation](https://docs.nvidia.com/deploy/cuda-compatibility/#frequently-asked-questions))
- [ ] Ensure an NVIDIA GPU is installed and recognized by the system (run `nvidia-smi` to verify).
- [ ] Ensure proper permissions are set for accessing GPU resources.
- [ ] Docker users - Verify that the NVIDIA Container Toolkit is configured correctly (e.g. run `sudo docker run --rm --gpus all nvidia/cuda:11.0.3-base-ubuntu20.04 nvidia-smi`) | null | https://github.com/zylon-ai/private-gpt/pull/2059 | null | {'base_commit': '89477ea9d3a83181b0222b732a81c71db9edf142', 'files': [{'path': 'Dockerfile.llamacpp-cpu', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 23, 30]}}}, {'path': 'Dockerfile.ollama', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 13, 20]}}}, {'path': 'docker-compose.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 29, 34], 'mod': [15, 47, 60]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docker-compose.yaml"
],
"test": [],
"config": [
"Dockerfile.ollama",
"Dockerfile.llamacpp-cpu"
],
"asset": []
} | 1 |
scikit-learn | scikit-learn | e04b8e70e60df88751af5cd667cafb66dc32b397 | https://github.com/scikit-learn/scikit-learn/issues/26590 | Bug | KNNImputer add_indicator fails to persist where missing data had been present in training | ### Describe the bug
Hello, I've encountered an issue where the KNNImputer records, when `.fit` is called, which fields had missing data, but those fitted indicators are not applied if `.transform` is later called on a dense matrix. I would have expected it to return a 2x3 matrix rather than 2x2, with `missingindicator_A = False` for all cases.
Reproduction steps below. Any help much appreciated :)
### Steps/Code to Reproduce
```python
>>> import pandas as pd
>>> from sklearn.impute import KNNImputer
>>> knn = KNNImputer(add_indicator=True)
>>> df = pd.DataFrame({'A': [0, None], 'B': [1, 2]})
>>> df
A B
0 0.0 1
1 NaN 2
>>> knn.fit(df)
KNNImputer(add_indicator=True)
>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())
A B missingindicator_A
0 0.0 1.0 0.0
1 0.0 2.0 1.0
>>> df['A'] = 0
>>> pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())
```
### Expected Results
```
A B missingindicator_A
0 0.0 1.0 0.0
1 0.0 2.0 0.0
```
### Actual Results
```pytb
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[30], line 1
----> 1 pd.DataFrame(knn.transform(df), columns=knn.get_feature_names_out())
File /opt/conda/lib/python3.10/site-packages/pandas/core/frame.py:694, in DataFrame.__init__(self, data, index, columns, dtype, copy)
684 mgr = dict_to_mgr(
685 # error: Item "ndarray" of "Union[ndarray, Series, Index]" has no
686 # attribute "name"
(...)
691 typ=manager,
692 )
693 else:
--> 694 mgr = ndarray_to_mgr(
695 data,
696 index,
697 columns,
698 dtype=dtype,
699 copy=copy,
700 typ=manager,
701 )
703 # For data is list-like, or Iterable (will consume into list)
704 elif is_list_like(data):
File /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:351, in ndarray_to_mgr(values, index, columns, dtype, copy, typ)
346 # _prep_ndarray ensures that values.ndim == 2 at this point
347 index, columns = _get_axes(
348 values.shape[0], values.shape[1], index=index, columns=columns
349 )
--> 351 _check_values_indices_shape_match(values, index, columns)
353 if typ == "array":
355 if issubclass(values.dtype.type, str):
File /opt/conda/lib/python3.10/site-packages/pandas/core/internals/construction.py:422, in _check_values_indices_shape_match(values, index, columns)
420 passed = values.shape
421 implied = (len(index), len(columns))
--> 422 raise ValueError(f"Shape of passed values is {passed}, indices imply {implied}")
ValueError: Shape of passed values is (2, 2), indices imply (2, 3)
```
### Versions
```shell
python3, sklearn = 1.2.1
```
| null | https://github.com/scikit-learn/scikit-learn/pull/26600 | null | {'base_commit': 'e04b8e70e60df88751af5cd667cafb66dc32b397', 'files': [{'path': 'doc/whats_new/v1.3.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'sklearn/impute/_knn.py', 'status': 'modified', 'Loc': {"('KNNImputer', 'transform', 242)": {'mod': [285]}}}, {'path': 'sklearn/impute/tests/test_common.py', 'status': 'modified', 'Loc': {"(None, 'test_keep_empty_features', 171)": {'add': [183]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/impute/_knn.py"
],
"doc": [
"doc/whats_new/v1.3.rst"
],
"test": [
"sklearn/impute/tests/test_common.py"
],
"config": [],
"asset": []
} | 1 |
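The invariant the reporter expects can be illustrated with a toy stand-in (pure Python, not sklearn's actual `MissingIndicator`/`KNNImputer` code): the indicator columns are determined by which features contained NaN at `fit()` time, so a fully dense input at `transform()` time should still produce those columns, just filled with zeros.

```python
import math

class ToyMissingIndicator:
    """Toy stand-in (not sklearn) for the expected behavior: any column
    that contained NaN at fit() time gets an indicator column at
    transform() time, even when the new data is fully dense."""

    def fit(self, X):
        n_cols = len(X[0])
        # remember which columns contained NaN during fit
        self.features_ = [
            j for j in range(n_cols)
            if any(math.isnan(row[j]) for row in X)
        ]
        return self

    def transform(self, X):
        # emit one indicator per *fitted* missing column, not per column
        # that happens to be missing in this particular X
        return [
            row + [1.0 if math.isnan(row[j]) else 0.0 for j in self.features_]
            for row in X
        ]

ind = ToyMissingIndicator().fit([[0.0, 1.0], [float("nan"), 2.0]])
dense = ind.transform([[0.0, 1.0], [0.0, 2.0]])  # no NaNs this time
print(dense)  # -> [[0.0, 1.0, 0.0], [0.0, 2.0, 0.0]]  (still 2x3)
```

With this behavior the output shape always matches `get_feature_names_out()`, which is exactly what the pandas `DataFrame` construction in the reproduction relies on.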
nvbn | thefuck | 9660ec7813a0e77ec3411682b0084d07b540084e | https://github.com/nvbn/thefuck/issues/543 | Adding sudo works for `aura -Sy` but not `aura -Ay` | `fuck` is unable to add `sudo` to an `aura -Ay` command:
```
$ aura -Ay foobar-beta-git # from AUR
aura >>= You have to use `sudo` for that.
$ fuck
No fucks given
```
But works as expected for `aura -Sy`:
```
$ aura -Sy foobar # pacman alias
error: you cannot perform this operation unless you are root.
aura >>= Please check your input.
$ fuck
sudo aura -Sy foobar [enter/↑/↓/ctrl+c]
```
It's slightly annoying anyway that the `aura` output is different in these cases, but is it possible for `thefuck` to work around this? Or is the only way to fix it for `aura` to give a stderr message containing "root"?
| null | https://github.com/nvbn/thefuck/pull/557 | null | {'base_commit': '9660ec7813a0e77ec3411682b0084d07b540084e', 'files': [{'path': 'thefuck/rules/sudo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"thefuck/rules/sudo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
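The shape of the fix can be sketched with a simplified stand-in (this is not thefuck's actual rule API): a sudo-style rule matches on known "needs root" phrases in the command's output, so aura's AUR-path message has to be in that list for `aura -Ay` to be fixable, while the pacman-style message already matches for `-Sy`.

```python
# Simplified stand-in for a sudo rule: scan the command's output for
# known "needs root" phrases and, on a match, prepend sudo.
PATTERNS = [
    "you cannot perform this operation unless you are root",  # pacman path
    "you have to use `sudo` for that",                        # aura AUR path
]

def match(output):
    lowered = output.lower()
    return any(p in lowered for p in PATTERNS)

def get_new_command(script):
    return f"sudo {script}"

out_ay = "aura >>= You have to use `sudo` for that."
out_sy = "error: you cannot perform this operation unless you are root."

for script, out in [("aura -Ay foobar-beta-git", out_ay),
                    ("aura -Sy foobar", out_sy)]:
    if match(out):
        print(get_new_command(script))
# -> sudo aura -Ay foobar-beta-git
# -> sudo aura -Sy foobar
```

This also answers the closing question: no change to `aura` is needed — extending the rule's pattern list is enough, as long as the exact aura phrasing is matched.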
scikit-learn | scikit-learn | 2707099b23a0a8580731553629566c1182d26f48 | https://github.com/scikit-learn/scikit-learn/issues/29294 | Moderate
help wanted | ConvergenceWarnings cannot be turned off | Hi, I'm unable to turn off convergence warnings from `GraphicalLassoCV`.
I've tried most of the solutions from the threads below, and none of them worked (see further down for my actual implementations):
https://stackoverflow.com/questions/879173/how-to-ignore-deprecation-warnings-in-python
https://stackoverflow.com/questions/32612180/eliminating-warnings-from-scikit-learn/33812427#33812427
https://stackoverflow.com/questions/53968004/how-to-silence-all-sklearn-warning
https://stackoverflow.com/questions/14463277/how-to-disable-python-warnings
Contrary to what the designers of sklearn's exceptions must have thought when they were implemented, some of us actually use stdout to log important information from the host program for diagnostics purposes. Flooding it with garbage that cannot be turned off, as is the case with cross-validation, is not ok.
To briefly speak to the severity of the issue, the above sklearn-specific questions relating to suppressing warnings have been viewed ~500K times with combined ~400 upvotes, and date back 7 years.
I've tried the following (`n_jobs` parameter does not appear to affect the result):
```py
from sklearn.covariance import GraphicalLassoCV
from sklearn.exceptions import ConvergenceWarning
import warnings
warnings.filterwarnings("ignore", category=ConvergenceWarning)
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
from sklearn.covariance import GraphicalLassoCV
import warnings
warnings.filterwarnings(action='ignore')
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
from sklearn.covariance import GraphicalLassoCV
from sklearn.exceptions import ConvergenceWarning
import warnings
with warnings.catch_warnings():
warnings.simplefilter("ignore", ConvergenceWarning)
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
from sklearn.covariance import GraphicalLassoCV
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
import contextlib
import os, sys
from sklearn.covariance import GraphicalLassoCV
@contextlib.contextmanager
def suppress_stdout():
with open(os.devnull, 'w') as fnull:
old_stdout = sys.stdout
sys.stdout = fnull
try:
yield
finally:
sys.stdout = old_stdout
with suppress_stdout():
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
```
```py
from sklearn.covariance import GraphicalLassoCV
import logging
logging.captureWarnings(True)
logging.getLogger("py.warnings").setLevel(logging.ERROR)
model = GraphicalLassoCV(n_jobs=4)
model = model.fit(data)
``` | null | https://github.com/scikit-learn/scikit-learn/pull/30380 | null | {'base_commit': '2707099b23a0a8580731553629566c1182d26f48', 'files': [{'path': 'sklearn/utils/parallel.py', 'status': 'modified', 'Loc': {"('_FuncWrapper', 'with_config', 121)": {'add': [122]}, "(None, '_with_config', 24)": {'mod': [24, 26, 27]}, "('Parallel', '__call__', 54)": {'mod': [73, 74, 77]}, "('_FuncWrapper', None, 114)": {'mod': [121]}, "('_FuncWrapper', '__call__', 125)": {'mod': [126, 127, 137, 138]}}}, {'path': 'sklearn/utils/tests/test_parallel.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 11]}, "(None, 'test_dispatch_config_parallel', 56)": {'add': [100]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/utils/parallel.py"
],
"doc": [],
"test": [
"sklearn/utils/tests/test_parallel.py"
],
"config": [],
"asset": []
} | 1 |
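A likely reason every in-process filter quoted in the row above still let the warnings through with `n_jobs=4`: joblib can run the cross-validation fits in separate worker processes, and worker processes do not inherit the parent's `warnings` filters (the linked PR changes `sklearn/utils/parallel.py` to propagate configuration to workers). One hedged workaround, illustrative rather than the PR's actual fix, is to export the filter through the `PYTHONWARNINGS` environment variable, which freshly spawned Python workers read at startup:

```python
import os
import warnings

# Child Python processes read PYTHONWARNINGS at interpreter startup, so
# setting it before the workers are spawned carries the filter across
# process boundaries. "ignore::UserWarning" is just an example category.
os.environ["PYTHONWARNINGS"] = "ignore::UserWarning"

# Within the current process, a context-managed filter works as usual:
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore", UserWarning)
    warnings.warn("noisy", UserWarning)

assert caught == []  # the warning was suppressed locally
```

This does not silence warnings in workers that were forked before the variable was set, so it should run before any parallel code is imported or started.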
All-Hands-AI | OpenHands | 7b2b1eff57e41364b4b427e36e766607e7eed3a0 | https://github.com/All-Hands-AI/OpenHands/issues/20 | Control Loop: long term planning and execution | The biggest, most complicated aspect of Devin is long-term planning and execution. I'd like to start a discussion about how this might work in OpenDevin.
There's some [recent prior work from Microsoft](https://arxiv.org/pdf/2403.08299.pdf) with some impressive results. I'll summarize here, with some commentary.
## Overall Flow
* User specifies objective and associated settings
* Conversation Manager kicks in
* Sends convo to Agent Scheduler
* Agents execute commands
* Output is placed back into the conversation
* Rinse and repeat
## Configuration
* A YAML file defines a set of actions/commands the bot can take (e.g. `npm test`)
* comment: why not just leave it open-ended?
* You can have different agents with different capabilities, e.g. a "dev agent" and a "reviewer agent", who work collaboratively
* comment: this sounds like MetaGPT
## Components
### Conversation Manager
* maintains message history and command outputs
* decides when to interrupt the conversation
* comment: for what? more info from the user?
* decides when the conversation is over, i.e. task has been completed
* agent can send a "stop" command, max tokens can be reached, problems w/ execution environment
### Parser
* interprets agent output and turns it into commands, file edits, etc
* in case of parsing failure, a message is sent back to the agent to rewrite its command
### Output Organizer
* Takes command output and selectively places it into the conversation history
* sometimes summarizes the content first
* comment: why not just drop everything back into the conversation history (maybe truncating really long CLI output)
### Agent Scheduler
* orchestrates different agents
* uses different algos for deciding who gets to go next
* round-robin: everyone takes turns in order
* token-based: agent gets to keep going until it says it's done
* priority-based: agents go based on (user defined?) priority
### Tools Library
* file editing (can edit entire file, or specify start line and end line)
* retrieval (file contents, `ls`, `grep`). Seems to use vector search as well
* build and execution: abstracts away the implementation in favor of simple commands like `build foo`
* testing and validation: includes linters and bug-finding utils
* git: can commit, push, merge
* communication: can ask a human for input/feedback, can talk to other agents
### Evaluation Environment
* runs in Docker
| null | https://github.com/All-Hands-AI/OpenHands/pull/3771 | null | {'base_commit': '7b2b1eff57e41364b4b427e36e766607e7eed3a0', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [230]}}}, {'path': 'containers/runtime/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3, 5, 9]}}}, {'path': 'frontend/src/components/AgentStatusBar.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20, 92], 'mod': [94, 95, 96, 97, 98, 99, 100]}}}, {'path': 'frontend/src/i18n/translation.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [465, 482], 'mod': [75, 81, 87, 339, 344, 389, 392, 393, 397, 402, 407, 412, 417, 422, 427, 432, 437, 442, 447, 452, 457, 462, 467, 472, 478, 490, 496, 499, 502, 505, 508, 511, 514, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 536, 541, 546, 551, 556, 561, 566, 571, 576, 581, 586, 605, 610, 615, 620, 638, 643, 648, 653, 658, 690, 736, 741, 746, 751, 757, 763, 769, 775, 781, 786, 791, 794, 799, 805, 811, 816, 817, 822, 823]}}}, {'path': 'frontend/src/services/actions.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 140], 'mod': [12]}, "(None, 'handleAssistantMessage', 141)": {'add': [153], 'mod': [152]}}}, {'path': 'frontend/src/services/session.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "('Session', None, 11)": {'add': [15, 147, 148]}, "('Session', '_setupSocket', 76)": {'add': [85, 117], 'mod': [97]}, "('Session', 'send', 148)": {'mod': [150]}}}, {'path': 'frontend/src/store.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 21]}}}, {'path': 'frontend/src/types/Message.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33]}}}, {'path': 'frontend/src/types/ResponseType.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3]}}}, {'path': 'openhands/core/main.py', 'status': 'modified', 'Loc': {"(None, 'create_runtime', 50)": {'mod': [58]}}}, 
{'path': 'openhands/runtime/client/client.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 20, 564]}}}, {'path': 'openhands/runtime/client/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('EventStreamRuntime', '__init__', 115)": {'add': [121, 132, 133, 159, 181], 'mod': [149, 172, 174]}, "('EventStreamRuntime', '_init_container', 197)": {'add': [283], 'mod': [204, 205, 206, 244, 248, 254]}, "('EventStreamRuntime', '_find_available_port', 534)": {'add': [541]}}}, {'path': 'openhands/runtime/e2b/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('E2BRuntime', '__init__', 21)": {'add': [27], 'mod': [29]}}}, {'path': 'openhands/runtime/remote/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('RemoteRuntime', '__init__', 51)": {'add': [57], 'mod': [171]}}}, {'path': 'openhands/runtime/runtime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('Runtime', '__init__', 54)": {'add': [60, 65]}}}, {'path': 'openhands/server/session/agent.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0]}, "('AgentSession', 'start', 40)": {'add': [48], 'mod': [67]}, "('AgentSession', '_create_security_analyzer', 92)": {'add': [100], 'mod': [99]}, "('AgentSession', '_create_runtime', 105)": {'add': [123], 'mod': [115, 119]}, "('AgentSession', None, 13)": {'add': [125], 'mod': [105]}, "('AgentSession', '_create_controller', 126)": {'mod': [181]}}}, {'path': 'openhands/server/session/manager.py', 'status': 'modified', 'Loc': {"('SessionManager', 'send', 36)": {'mod': [38, 40]}}}, {'path': 'openhands/server/session/session.py', 'status': 'modified', 'Loc': {"('Session', None, 30)": {'add': [35]}, "('Session', '__init__', 37)": {'add': [47]}, "('Session', '_initialize_agent', 71)": {'add': [115]}, "('Session', 'send', 167)": {'add': [174]}, "('Session', 'load_from_data', 192)": {'add': [197]}, '(None, None, None)': {'mod': [24]}, "('Session', 
'on_event', 127)": {'mod': [128, 138]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"openhands/runtime/e2b/runtime.py",
"frontend/src/types/Message.tsx",
"frontend/src/types/ResponseType.tsx",
"frontend/src/store.ts",
"openhands/runtime/remote/runtime.py",
"openhands/runtime/runtime.py",
"frontend/src/services/session.ts",
"openhands/server/session/agent.py",
"openhands/core/main.py",
"frontend/src/i18n/translation.json",
"openhands/server/session/session.py",
"openhands/runtime/client/client.py",
"frontend/src/components/AgentStatusBar.tsx",
"openhands/runtime/client/runtime.py",
"frontend/src/services/actions.ts",
"openhands/server/session/manager.py"
],
"doc": [
"containers/runtime/README.md"
],
"test": [],
"config": [
".gitignore"
],
"asset": []
} | 1 | |
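The Agent Scheduler's round-robin policy described in the row above can be sketched in a few lines. The class and method names here are illustrative, not OpenHands' actual API:

```python
from collections import deque

class RoundRobinScheduler:
    """Give each registered agent a turn in fixed order (sketch only)."""

    def __init__(self, agents):
        self._queue = deque(agents)

    def next_agent(self):
        agent = self._queue[0]
        self._queue.rotate(-1)  # current agent moves to the back
        return agent

scheduler = RoundRobinScheduler(["dev", "reviewer"])
turns = [scheduler.next_agent() for _ in range(4)]
# turns == ["dev", "reviewer", "dev", "reviewer"]
```

The token-based and priority-based policies mentioned in the issue would swap out only `next_agent`: keep returning the same agent until it signals it is done, or pop from a priority queue instead of rotating.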
All-Hands-AI | OpenHands | 2242702cf94eab7275f2cb148859135018d9b280 | https://github.com/All-Hands-AI/OpenHands/issues/1251 | enhancement | Sandbox Capabilities Framework | **Summary**
We have an existing use case for a Jupyter-aware agent, which always runs in a sandbox where Jupyter is available. There are some other scenarios I can think of where an agent might want some guarantees about what it can do with the sandbox:
* We might want a "postgres migration writer", which needs access to a postgres instance
* We might have a "cypress test creator" agent, which would need access to cypress
* Further down the road, we might want to have an [Open Interpreter](https://github.com/OpenInterpreter/open-interpreter) agent, which needs access to osascript
* etc etc
This proposal would allow agents to guarantee that certain programs are available in the sandbox, or that certain services are running in a predictable way.
What if we did something like this:
**Motivation**
We want agents to be able to have certain guarantees about the sandbox environment. But we also want our sandbox interface to be generic--something like "you have a bash terminal".
The latter is especially important, because we want users to be able to bring their own sandbox images. E.g. you might use an off-the-shelf haskell image if your project uses haskell--otherwise you'd need to go through the install process every time you start OpenDevin, or maintain a fork of the sandbox.
**Technical Design**
* For every requirement we support (e.g. jupyter, postgres, cypress), we have a bash script that
* checks if it's installed
* if not, installs it
* maybe starts something in the background
* Let agents specify a list of requirements
* e.g. CodeActAgent could say requirements: ['jupyter']
* When we start the Agent+Sandbox pair, we run the necessary bash scripts
* should be pretty quick if the requirement is already built into the image
* Then the agent has some guarantees about the requirement being met, and how it's running
* e.g. we can put in the prompt "there's a postgres server running on port 5432, user foo, password bar"
* If there are specific ways of interacting with that env (e.g. for jupyter, it seems we have to write to a websocket that's open in the sandbox?) the agent can implement custom Actions, like run_in_jupyter
**Alternatives to Consider**
* Building a bunch of stuff into one big sandbox
* Building special sandboxes that are required by certain agents (e.g. a JupyterSandbox)
**Additional context**
https://opendevin.slack.com/archives/C06QKSD9UBA/p1713552591042089
| null | https://github.com/All-Hands-AI/OpenHands/pull/1255 | null | {'base_commit': '2242702cf94eab7275f2cb148859135018d9b280', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [220]}}}, {'path': 'agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17]}, "('CodeActAgent', None, 66)": {'add': [71]}}}, {'path': 'opendevin/agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('Agent', None, 11)": {'add': [19]}}}, {'path': 'opendevin/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 20]}, "(None, 'get', 140)": {'add': [147]}}}, {'path': 'opendevin/controller/action_manager.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}, "('ActionManager', None, 19)": {'add': [43]}}}, {'path': 'opendevin/controller/agent_controller.py', 'status': 'modified', 'Loc': {"('AgentController', '__init__', 41)": {'add': [55]}}}, {'path': 'opendevin/sandbox/docker/exec_box.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6]}, "('DockerExecBox', None, 36)": {'add': [124]}}}, {'path': 'opendevin/sandbox/docker/local_box.py', 'status': 'modified', 'Loc': {"('LocalBox', None, 25)": {'add': [41]}}}, {'path': 'opendevin/sandbox/docker/ssh_box.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 17, 357], 'mod': [359]}, "('DockerSSHBox', 'setup_user', 95)": {'add': [139]}, "('DockerSSHBox', None, 46)": {'add': [210]}, "('DockerSSHBox', 'restart_docker_container', 271)": {'add': [309]}}}, {'path': 'opendevin/sandbox/e2b/sandbox.py', 'status': 'modified', 'Loc': {"('E2BBox', None, 14)": {'add': [63]}}}, {'path': 'opendevin/sandbox/sandbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('Sandbox', 'close', 28)": {'add': [29]}, "('Sandbox', None, 8)": {'mod': [8]}}}, {'path': 'opendevin/schema/config.py', 'status': 'modified', 'Loc': {"('ConfigType', None, 4)": {'add': [10]}}}]} | 
[] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"opendevin/sandbox/docker/ssh_box.py",
"opendevin/schema/config.py",
"agenthub/codeact_agent/codeact_agent.py",
"opendevin/controller/action_manager.py",
"opendevin/sandbox/docker/local_box.py",
"opendevin/sandbox/e2b/sandbox.py",
"opendevin/sandbox/sandbox.py",
"opendevin/sandbox/docker/exec_box.py",
"opendevin/agent.py",
"opendevin/config.py",
"opendevin/controller/agent_controller.py"
],
"doc": [],
"test": [],
"config": [
"Makefile"
],
"asset": []
} | 1 |
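The per-requirement scripts proposed in the row above ("checks if it's installed; if not, installs it") can be sketched generically. The tool name and install command are placeholders supplied by whoever defines a requirement; nothing here mirrors the project's real implementation:

```python
import shutil
import subprocess

def ensure_requirement(tool: str, install_cmd: str) -> str:
    """Install `tool` in the sandbox only if it is not already present."""
    if shutil.which(tool) is not None:
        return "already installed"
    # Placeholder install step, e.g. "pip install jupyter" for a
    # hypothetical "jupyter" requirement.
    subprocess.run(install_cmd, shell=True, check=True)
    return "installed"
```

An agent could then declare something like `sandbox_plugins = ["jupyter"]` and the harness would call `ensure_requirement` for each entry before the agent starts, which is what makes the check "pretty quick if the requirement is already built into the image."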
deepfakes | faceswap | 0ea743029db0d47f09d33ef90f50ad84c20b085f | https://github.com/deepfakes/faceswap/issues/263 | Very slow extraction with scripts vs fakeapp 1.1 | 1080ti + OC'd 2600k using winpython 3.6.2 cuda 9.0 and tensorflow 1.6
**Training** utilizes ~50% of the GPU now (which is better than the ~25% utilized with FA 1.1) but extraction doesn't seem to utilize the GPU at all (getting around 1.33it/s) whereas with FA 1.1 I get around 17it/s - tried CNN and it dropped down to taking nearly a minute per file. Although I say it doesn't utilize the GPU it still seems to use all 11GB of RAM on the GPU, just none of the compute cores or processor are in use. CPU is using about 17%.
Tried using extracted data from FA 1.1 with .py -convert but it just says 'no alignment found for file: x' for every file even though --alignments points to the path with alignments.json
I would've thought the alignments.json from FA 1.1 was compatible so I'm not sure if the above is a separate issue or not. | null | https://github.com/deepfakes/faceswap/pull/259 | null | {'base_commit': '0ea743029db0d47f09d33ef90f50ad84c20b085f', 'files': [{'path': 'lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py', 'status': 'modified', 'Loc': {"(None, 'initialize', 108)": {'add': [126], 'mod': [108, 117, 123, 124, 125]}, "(None, 'extract', 137)": {'mod': [137, 138, 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 183]}}}, {'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('DirectoryProcessor', 'get_faces_alignments', 140)": {'mod': [149]}, "('DirectoryProcessor', 'get_faces', 159)": {'mod': [161, 165]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {"(None, 'detect_faces', 3)": {'mod': [3, 4]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {"('ExtractTrainingData', 'add_optional_arguments', 22)": {'mod': [25]}, "('ExtractTrainingData', 'process', 79)": {'mod': [95]}, "('ExtractTrainingData', 'processFiles', 100)": {'mod': [105]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/faces_detect.py",
"lib/cli.py",
"lib/FaceLandmarksExtractor/FaceLandmarksExtractor.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
fastapi | fastapi | ef176c663195489b44030bfe1fb94a317762c8d5 | https://github.com/fastapi/fastapi/issues/3323 | feature
reviewed | Support PEP 593 `Annotated` for specifying dependencies and parameters | ### First check
* [x] I added a very descriptive title to this issue.
* [x] I used the GitHub search to find a similar issue and didn't find it.
* [x] I searched the FastAPI documentation, with the integrated search.
* [x] I already searched in Google "How to X in FastAPI" and didn't find any information.
* [x] I already read and followed all the tutorial in the docs and didn't find an answer.
* [x] I already checked if it is not related to FastAPI but to [Pydantic](https://github.com/samuelcolvin/pydantic).
* [x] I already checked if it is not related to FastAPI but to [Swagger UI](https://github.com/swagger-api/swagger-ui).
* [x] I already checked if it is not related to FastAPI but to [ReDoc](https://github.com/Redocly/redoc).
* [x] After submitting this, I commit to:
* Read open issues with questions until I find 2 issues where I can help someone and add a comment to help there.
* Or, I already hit the "watch" button in this repository to receive notifications and I commit to help at least 2 people that ask questions in the future.
* Implement a Pull Request for a confirmed bug.
### Example
I propose to allow transforming:
<!-- Replace the code below with your own self-contained, minimal, reproducible, example -->
```Python
from typing import Optional
from fastapi import Depends, FastAPI
app = FastAPI()
async def common_parameters(q: Optional[str] = None, skip: int = 0, limit: int = 100):
return {"q": q, "skip": skip, "limit": limit}
@app.get("/items/")
async def read_items(commons: dict = Depends(common_parameters)):
return commons
```
to
```Python
from typing import Annotated, Optional
from fastapi import Depends, FastAPI
app = FastAPI()
async def common_parameters(q: Optional[str] = None, skip: int = 0, limit: int = 100):
return {"q": q, "skip": skip, "limit": limit}
@app.get("/items/")
async def read_items(commons: Annotated[dict, Depends(common_parameters)]):
return commons
```
### Discussion
[PEP 593](https://www.python.org/dev/peps/pep-0593/) Added `Annotated` for adding additional annotations beyond type annotations. I think FastAPI's `Depends`, `Query`, `Body` and the likes fit well with the kind of additional annotations this supports.
This would also make default values less awkward:
```python
@app.get("/items/")
async def read_items(q: Optional[str] = Query(None, max_length=50)):
pass
```
Could become
```python
@app.get("/items/")
async def read_items(q: Annotated[Optional[str], Query(max_length=50)] = None):
pass
```
This will also solve the issue mentioned [in the docs](https://fastapi.tiangolo.com/tutorial/path-params-numeric-validations/#order-the-parameters-as-you-need) of parameter ordering.
Finally, it is sometimes convenient to use the same function as both a FastAPI dependency and a regular function. In these cases, because `= Depends(...)` is a default parameter value, if you forget to pass a parameter the error is not caught by your IDE. Worse, it is not caught at runtime because Python will just pass along the `Depends` object. This will probably cause an error down the road, but may silently succeed in some cases.
I'm willing to implement this if you think it's a good idea. | null | https://github.com/fastapi/fastapi/pull/4871 | null | {'base_commit': 'ef176c663195489b44030bfe1fb94a317762c8d5', 'files': [{'path': 'fastapi/dependencies/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [58], 'mod': [51]}, "(None, 'get_dependant', 282)": {'add': [336], 'mod': [301, 303, 307, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335]}, "(None, 'get_param_sub_dependant', 114)": {'mod': [115, 117, 118, 119, 120, 121, 124, 126]}, "(None, 'add_non_field_param_to_dependency', 340)": {'mod': [341, 343, 344, 346, 347, 349, 350, 352, 353, 355, 356, 358, 359]}, "(None, 'get_param_field', 364)": {'mod': [364, 366, 368, 369, 370, 371, 372, 373, 374, 375, 376, 377, 378, 379, 380, 384, 385, 386, 387, 388, 389, 390, 391, 392, 393, 394, 395, 396, 397, 398, 399, 400, 401, 402, 403, 404, 405, 406, 407, 408, 409, 410, 411, 412, 413, 414, 416]}}}, {'path': 'fastapi/param_functions.py', 'status': 'modified', 'Loc': {"(None, 'Path', 7)": {'mod': [8]}}}, {'path': 'fastapi/params.py', 'status': 'modified', 'Loc': {"('Path', '__init__', 63)": {'add': [82], 'mod': [65, 85]}, "('Form', '__init__', 280)": {'mod': [282]}, "('File', '__init__', 320)": {'mod': [322]}}}, {'path': 'fastapi/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "(None, 'create_response_field', 60)": {'mod': [76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 88]}}}, {'path': 'tests/main.py', 'status': 'modified', 'Loc': {"(None, 'get_path_param_id', 52)": {'mod': [52, 53, 56, 57]}}}, {'path': 'tests/test_application.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [228, 229, 230, 231, 232, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257]}}}, {'path': 'tests/test_params_repr.py', 'status': 'modified', 'Loc': {"(None, 
'test_path_repr', 22)": {'mod': [22, 23]}}}, {'path': 'tests/test_path.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [196]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"fastapi/dependencies/utils.py",
"fastapi/utils.py",
"fastapi/param_functions.py",
"tests/main.py",
"fastapi/params.py"
],
"doc": [],
"test": [
"tests/test_params_repr.py",
"tests/test_application.py",
"tests/test_path.py"
],
"config": [],
"asset": []
} | 1 |
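What makes the `Annotated` proposal in the row above workable is that PEP 593 metadata is recoverable at runtime via `typing.get_type_hints(..., include_extras=True)`. A minimal sketch — `Query` here is a stand-in class, not FastAPI's real parameter object:

```python
from typing import Annotated, get_type_hints

class Query:
    """Illustrative stand-in for fastapi.Query."""
    def __init__(self, max_length=None):
        self.max_length = max_length

def read_items(q: Annotated[str, Query(max_length=50)]):
    return {"q": q}

# A framework can recover the attached metadata without the parameter
# needing a magic default value:
hints = get_type_hints(read_items, include_extras=True)
query_meta = hints["q"].__metadata__[0]
assert query_meta.max_length == 50
assert read_items("shoes") == {"q": "shoes"}
```

Because the metadata lives in the annotation rather than the default value, calling the function directly passes plain arguments (no stray `Depends` object), and parameter ordering is no longer constrained by which parameters have defaults.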
python | cpython | e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0 | https://github.com/python/cpython/issues/92417 | docs | Many references to unsupported Python versions in the stdlib docs | **Documentation**
There are currently many places in the stdlib docs where there are needless comments about how to maintain compatibility with Python versions that are now end-of-life. Many of these can now be removed, to improve brevity and clarity in the documentation.
I plan to submit a number of PRs to fix these.
PRs:
- #92418
- #92419
- #92420
- #92421
- #92422
- #92423
- #92424
- #92425
- https://github.com/python/cpython/pull/92502
- #92538
- #92539
- #92543
- #92544
- [More to come]
Backports:
- https://github.com/python/cpython/pull/92459
- https://github.com/python/cpython/pull/92460
- https://github.com/python/cpython/pull/92461
- https://github.com/python/cpython/pull/92462
- https://github.com/python/cpython/pull/92463
- https://github.com/python/cpython/pull/92491
- https://github.com/python/cpython/pull/92467
- https://github.com/python/cpython/pull/92468
- https://github.com/python/cpython/pull/92492
- https://github.com/python/cpython/pull/92464
- https://github.com/python/cpython/pull/92465
- https://github.com/python/cpython/pull/92466
- https://github.com/python/cpython/pull/92472
- https://github.com/python/cpython/pull/92473
- https://github.com/python/cpython/pull/92474
- https://github.com/python/cpython/pull/92485
- https://github.com/python/cpython/pull/92486
- https://github.com/python/cpython/pull/92487
- https://github.com/python/cpython/pull/92606
- https://github.com/python/cpython/pull/92607 | null | https://github.com/python/cpython/pull/92539 | null | {'base_commit': 'e01eeb7b4b8d00b9f5c6acb48957f46ac4e252c0', 'files': [{'path': 'Doc/library/unittest.mock-examples.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [663]}}}, {'path': 'Doc/library/unittest.mock.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2384]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": " ",
"info_type": ""
} | {
"code": [],
"doc": [
"Doc/library/unittest.mock-examples.rst",
"Doc/library/unittest.mock.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | 23d8761615d0417eef5f52cc796518e44d41ca2a | https://github.com/scikit-learn/scikit-learn/issues/19248 | Documentation
module:cluster | Birch should be called BIRCH | Cf. the original paper.
Zhang, T.; Ramakrishnan, R.; Livny, M. (1996). "BIRCH: an efficient data clustering method for very large databases". Proceedings of the 1996 ACM SIGMOD international conference on Management of data - SIGMOD '96. pp. 103–114. doi:10.1145/233269.233324 | null | https://github.com/scikit-learn/scikit-learn/pull/19368 | null | {'base_commit': '23d8761615d0417eef5f52cc796518e44d41ca2a', 'files': [{'path': 'doc/modules/clustering.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [106, 946, 965, 999, 1001, 1005]}}}, {'path': 'examples/cluster/plot_birch_vs_minibatchkmeans.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6, 39, 48, 58, 78]}}}, {'path': 'examples/cluster/plot_cluster_comparison.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [146]}}}, {'path': 'sklearn/cluster/_birch.py', 'status': 'modified', 'Loc': {"('Birch', None, 335)": {'mod': [336]}, "('Birch', '_global_clustering', 648)": {'mod': [677]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/cluster/plot_birch_vs_minibatchkmeans.py",
"sklearn/cluster/_birch.py",
"examples/cluster/plot_cluster_comparison.py"
],
"doc": [
"doc/modules/clustering.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 65b807e4e95fe6da3e30f13e4271dc9dcfaa334e | https://github.com/localstack/localstack/issues/402 | type: bug | Dynamodbstreams Use Kinesis Shard Identifiers | <!-- Love localstack? Please consider supporting our collective:
👉 https://opencollective.com/localstack/donate -->
Dynamodbstreams seem to be making use of Kinesis shard identifiers which are considered invalid by botocore request validators.
Error response from boto3 when attempting to `get_shard_iterator` from shard ids returned from `describe_stream`:
```
[test-integration:L51:27s] exception = ParamValidationError(u'Parameter validation failed:\nInvalid length for parameter ShardId, value: 20, valid range: 28-inf',)
[test-integration:L52:27s]
[test-integration:L53:27s] def _reraise_exception(self, exception):
[test-integration:L54:27s] if hasattr(exception, 'response'):
[test-integration:L55:27s] code = exception.response['Error']['Code']
[test-integration:L56:27s]
[test-integration:L57:27s] if code == 'TrimmedDataAccessException':
[test-integration:L58:27s] raise TrimmedRecordsException()
[test-integration:L59:27s] elif code == 'ResourceNotFoundException':
[test-integration:L60:27s] raise ResourceDNEException()
[test-integration:L61:27s]
[test-integration:L62:27s] > raise exception
[test-integration:L63:27s] E ParamValidationError: Parameter validation failed:
[test-integration:L64:27s] E Invalid length for parameter ShardId, value: 20, valid range: 28-inf
[test-integration:L65:27s]
[test-integration:L66:27s] .tox/py27/lib/python2.7/site-packages/pyrokinesis/dynamodbstreams_ingress_backend.py:111: ParamValidationError
```
The following is the response object I am getting back when I `describe_stream` on the stream's ARN:
```
[test-integration:L68:27s] {'ResponseMetadata': {'HTTPStatusCode': 200, 'RetryAttempts': 0, 'HTTPHeaders': {'content-length': '692', 'access-control-allow-origin': '*', 'date': 'Fri, 13 Oct 2017 12:47:00 GMT', 'server': 'Werkzeug/0.12.2 Python/2.7.13', 'content-type': 'application/json'}}, u'StreamDescription': {u'StreamLabel': u'TODO', u'StreamArn': u'arn:aws:dynamodb:us-east-1:000000000000:table/DynamoTest/stream/2017-10-13T12:47:00', u'Shards': [{u'ShardId': u'shardId-000000000000', u'SequenceNumberRange': {u'StartingSequenceNumber': u'49577893583130519883135457518096755974321873497073123330'}}], u'KeySchema': [{u'KeyType': u'HASH', u'AttributeName': u'ID'}], u'TableName': u'DynamoTest', u'StreamStatus': u'ENABLED'}}
```
My localstack setup:
```
localstack 0.7.3
[localstack:L2:1s] 2017-10-13 15:10:35,915 INFO spawned: 'dashboard' with pid 13
[localstack:L3:1s] 2017-10-13 15:10:35,917 INFO spawned: 'infra' with pid 14
[localstack:L4:1s] (. .venv/bin/activate; bin/localstack web --port=8080)
[localstack:L5:1s] (. .venv/bin/activate; exec bin/localstack start)
[localstack:L6:1s] Starting local dev environment. CTRL-C to quit.
[localstack:L7:1s] * Running on http://0.0.0.0:8080/ (Press CTRL+C to quit)
[localstack:L8:1s] * Restarting with stat
[localstack:L9:1s] Starting mock Kinesis (http port 4568)...
[localstack:L10:1s] Starting mock S3 (http port 4572)...
[localstack:L11:1s] Starting mock DynamoDB (http port 4569)...
[localstack:L12:1s] * Debugger is active!
[localstack:L13:2s] * Debugger PIN: 281-540-735
[localstack:L14:2s] 2017-10-13 15:10:37,123 INFO success: dashboard entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[localstack:L15:2s] 2017-10-13 15:10:37,123 INFO success: infra entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
[localstack:L16:2s] Starting mock DynamoDB Streams service (http port 4570)...
[localstack:L17:2s] Listening at http://:::4565
[localstack:L18:2s] Initializing DynamoDB Local with the following configuration:
[localstack:L19:2s] Port: 4564
[localstack:L20:2s] InMemory: false
[localstack:L21:2s] DbPath: /tmp/localstack/dynamodb
[localstack:L22:2s] SharedDb: true
[localstack:L23:2s] shouldDelayTransientStatuses: false
[localstack:L24:2s] CorsParams: *
[localstack:L25:2s]
[localstack:L26:2s] * Running on http://0.0.0.0:4563/ (Press CTRL+C to quit)
```
| null | https://github.com/localstack/localstack/pull/403 | null | {'base_commit': '65b807e4e95fe6da3e30f13e4271dc9dcfaa334e', 'files': [{'path': 'localstack/services/dynamodbstreams/dynamodbstreams_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 119]}, "(None, 'post_request', 47)": {'add': [76], 'mod': [70, 78]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/services/dynamodbstreams/dynamodbstreams_api.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
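The validation error in the row above comes from botocore requiring `ShardId` to be at least 28 characters, while the DynamoDB Streams mock returned the 20-character Kinesis-style `shardId-000000000000`. Padding the numeric suffix to 20 digits yields a conforming identifier, since the `shardId-` prefix is 8 characters (8 + 20 = 28). This is illustrative only; the linked PR's exact format may differ:

```python
def make_shard_id(n: int) -> str:
    # "shardId-" (8 chars) + zero-padded 20-digit suffix = 28 chars,
    # the minimum length botocore's validator accepts for ShardId.
    return "shardId-{:020d}".format(n)

shard_id = make_shard_id(0)
assert shard_id == "shardId-00000000000000000000"
assert len(shard_id) == 28
```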
pallets | flask | ee76129812419d473eb62434051e81d5855255b6 | https://github.com/pallets/flask/issues/602 | Misspelling in docs @ flask.Flask.handle_exception | `Default exception handling that kicks in when an exception occours that is not caught. In debug mode the exception will be re-raised immediately, otherwise it is logged and the handler for a 500 internal server error is used. If no such handler exists, a default 500 internal server error message is displayed.`
Occours should be occurs.
I looked around in the project code to see if I could update this, but it looks like the docs subdir is no longer used? I could be wrong, if you let me know where this is I'll update it and send a PR :)
| null | https://github.com/pallets/flask/pull/603 | null | {'base_commit': 'ee76129812419d473eb62434051e81d5855255b6', 'files': [{'path': 'flask/app.py', 'status': 'modified', 'Loc': {"('Flask', 'handle_exception', 1266)": {'mod': [1268]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "Not sure about the issue category, because the developer is asking about a typo error",
"info_type": ""
} | {
"code": [
"flask/app.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
ansible | ansible | 79d00adc52a091d0ddd1d8a96b06adf2f67f161b | https://github.com/ansible/ansible/issues/36378 | cloud
aws
module
affects_2.4
support:certified
docs | Documentation Error for ec2_vpc_nacl rules | ##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
ec2_vpc_nacl
##### ANSIBLE VERSION
```
ansible 2.4.3.0
config file = None
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python2.7/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
##### OS / ENVIRONMENT
N/A
##### SUMMARY
The example documentation is the wrong way round for ec2_vpc_nacl with respect to the icmp code and type.
##### STEPS TO REPRODUCE
https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py#L87 has the order of the `icmp_code` and `icmp_type` inverted compared to the code that parses it https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py#L298
##### EXPECTED RESULTS
##### ACTUAL RESULTS
| null | https://github.com/ansible/ansible/pull/36380 | null | {'base_commit': '79d00adc52a091d0ddd1d8a96b06adf2f67f161b', 'files': [{'path': 'lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [87]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/cloud/amazon/ec2_vpc_nacl.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
geekan | MetaGPT | a32e238801d0a8f3c1bd97b98d038b40977a8cc6 | https://github.com/geekan/MetaGPT/issues/1174 | New provider: Amazon Bedrock (AWS) | **Feature description**
Please include support for Amazon Bedrock models. These models can be from Amazon, Anthropic, AI21, Cohere, Mistral, or Meta Llama 2.
**Your Feature**
1. Create a new LLM Provides under [metagpt/provider](https://github.com/geekan/MetaGPT/tree/db65554c4931d4a95e20331b770cf4f7e5202264/metagpt/provider) for Amazon Bedrock
2. Include it in the [LLMType](https://github.com/geekan/MetaGPT/blob/db65554c4931d4a95e20331b770cf4f7e5202264/metagpt/configs/llm_config.py#L17) available | null | https://github.com/geekan/MetaGPT/pull/1231 | null | {'base_commit': 'a32e238801d0a8f3c1bd97b98d038b40977a8cc6', 'files': [{'path': 'config/puppeteer-config.json', 'status': 'modified', 'Loc': {}}, {'path': 'metagpt/configs/llm_config.py', 'status': 'modified', 'Loc': {"('LLMType', None, 17)": {'add': [34]}, "('LLMConfig', None, 40)": {'add': [80], 'mod': [77]}}}, {'path': 'metagpt/provider/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19, 32]}}}, {'path': 'metagpt/utils/token_counter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [212]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [72]}}}, {'path': 'tests/metagpt/provider/mock_llm_config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}, {'path': 'tests/metagpt/provider/req_resp_const.py', 'status': 'modified', 'Loc': {"(None, 'llm_general_chat_funcs_test', 174)": {'add': [185]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"metagpt/utils/token_counter.py",
"metagpt/provider/__init__.py",
"metagpt/configs/llm_config.py",
"config/puppeteer-config.json",
"tests/metagpt/provider/mock_llm_config.py",
"tests/metagpt/provider/req_resp_const.py"
],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 | |
pandas-dev | pandas | 862cd05df4452592a99dd1a4fa10ce8cfb3766f7 | https://github.com/pandas-dev/pandas/issues/37494 | Enhancement
Groupby
ExtensionArray
NA - MaskedArrays
Closing Candidate | ENH: improve the resulting dtype for groupby operations on nullable dtypes | Follow-up on https://github.com/pandas-dev/pandas/pull/37433, and partly related to https://github.com/pandas-dev/pandas/issues/37493
Currently, after groupby operations we try to cast back to the original dtype when possible (at least in case of extension arrays). But this is not always correct, and also not done consistently. Some examples using the test case from the mentioned PR using a nullable Int64 column as input:
```
In [1]: df = DataFrame(
...: {
...: "A": ["A", "B"] * 5,
...: "B": pd.array([1, 2, 3, 4, 5, 6, 7, 8, 9, pd.NA], dtype="Int64"),
...: }
...: )
In [2]: df.groupby("A")["B"].sum()
Out[2]:
A
A 25
B 20
Name: B, dtype: Int64
In [3]: df.groupby("A")["B"].std()
Out[3]:
A
A 3.162278
B 2.581989
Name: B, dtype: float64
In [4]: df.groupby("A")["B"].mean()
Out[4]:
A
A 5
B 5
Name: B, dtype: Int64
In [5]: df.groupby("A")["B"].count()
Out[5]:
A
A 5
B 4
Name: B, dtype: int64
```
So some observations:
* For `sum()`, we correctly have Int64 for the result
* For `std()`, we could use the nullable Float64 instead of float64 dtype
* For `mean()`, we incorrectly cast back to Int64 dtype, as the result of mean should always be floating (in this case the casting just happened to work because the means were rounded numbers)
* For `count()`, we did not create a nullable Int64 dtype for the result, while this could be done if the input is nullable | null | https://github.com/pandas-dev/pandas/pull/38291 | null | {'base_commit': '862cd05df4452592a99dd1a4fa10ce8cfb3766f7', 'files': [{'path': 'pandas/core/dtypes/cast.py', 'status': 'modified', 'Loc': {"(None, 'maybe_cast_result_dtype', 342)": {'mod': [360, 362, 363, 364, 365]}}}, {'path': 'pandas/core/groupby/ops.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47]}, "('BaseGrouper', '_ea_wrap_cython_operation', 493)": {'mod': [524]}}}, {'path': 'pandas/tests/arrays/integer/test_arithmetic.py', 'status': 'modified', 'Loc': {"(None, 'test_reduce_to_float', 261)": {'mod': [280]}}}, {'path': 'pandas/tests/groupby/aggregate/test_cython.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "(None, 'test_cython_agg_nullable_int', 297)": {'add': [314]}}}, {'path': 'pandas/tests/groupby/test_function.py', 'status': 'modified', 'Loc': {"(None, 'test_apply_to_nullable_integer_returns_float', 1091)": {'mod': [1096]}}}, {'path': 'pandas/tests/resample/test_datetime_index.py', 'status': 'modified', 'Loc': {"(None, 'test_resample_integerarray', 112)": {'mod': [127]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/dtypes/cast.py",
"pandas/core/groupby/ops.py"
],
"doc": [],
"test": [
"pandas/tests/groupby/aggregate/test_cython.py",
"pandas/tests/arrays/integer/test_arithmetic.py",
"pandas/tests/resample/test_datetime_index.py",
"pandas/tests/groupby/test_function.py"
],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | eaf0a044fdc084ebeeb9bbfbcf42e6df2b1491bb | https://github.com/scikit-learn/scikit-learn/issues/16730 | Bug
Blocker
module:decomposition | BUG: MLE for PCA mis-estimates rank | After #16224 it looks like this code no longer produces the correct result:
```
import numpy as np
from sklearn.decomposition import PCA
n_samples, n_dim = 1000, 10
X = np.random.RandomState(0).randn(n_samples, n_dim)
X[:, -1] = np.mean(X[:, :-1], axis=-1) # true X dim is ndim - 1
pca_skl = PCA('mle', svd_solver='full')
pca_skl.fit(X)
assert pca_skl.n_components_ == n_dim - 1
```
Before #16224 this passed (`n_components_ == 9`) but after #16224 it gives 8. Not sure why this would happen given the singular value spectrum looks good:
```
import matplotlib.pyplot as plt
s = np.linalg.svdvals(X)
plt.stem(s)
```

Maybe an off-by-one error somewhere?
cc'ing @lschwetlick since it was your PR | null | https://github.com/scikit-learn/scikit-learn/pull/16841 | null | {'base_commit': 'eaf0a044fdc084ebeeb9bbfbcf42e6df2b1491bb', 'files': [{'path': 'doc/whats_new/v0.23.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [142, 143, 144, 145]}}}, {'path': 'sklearn/decomposition/_pca.py', 'status': 'modified', 'Loc': {"(None, '_assess_dimension', 31)": {'mod': [31, 32, 39, 42, 45, 46, 58, 59, 60, 62, 65, 66, 67, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 84, 90, 91, 92, 93, 94, 95, 96]}, "(None, '_infer_dimension', 106)": {'mod': [106, 107, 109, 111, 112, 113, 114]}, "('PCA', '_fit_full', 436)": {'mod': [475]}}}, {'path': 'sklearn/decomposition/tests/test_pca.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [592]}, "(None, 'test_fit_mle_too_few_samples', 615)": {'add': [625], 'mod': [617]}, "(None, 'test_n_components_mle', 291)": {'mod': [298]}, "(None, 'test_infer_dim_1', 326)": {'mod': [336]}, "(None, 'test_infer_dim_2', 340)": {'mod': [351]}, "(None, 'test_infer_dim_3', 354)": {'mod': [364]}, "(None, 'test_infer_dim_bad_spec', 573)": {'mod': [573, 574, 577, 578, 579]}, "(None, 'test_assess_dimension_error_rank_greater_than_features', 582)": {'mod': [582, 583, 584, 586, 587, 588, 589, 590, 591]}, "(None, 'test_assess_dimension_small_eigenvalues', 594)": {'mod': [594, 595, 596, 597, 598, 599, 600, 601, 602]}, "(None, 'test_infer_dim_mle', 605)": {'mod': [605, 606, 607, 608, 612]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/decomposition/_pca.py"
],
"doc": [
"doc/whats_new/v0.23.rst"
],
"test": [
"sklearn/decomposition/tests/test_pca.py"
],
"config": [],
"asset": []
} | 1 |
pallets | flask | 07c7d5730a2685ef2281cc635e289685e5c3d478 | https://github.com/pallets/flask/issues/2813 | Allow flexible routing with SERVER_NAME config | ### Expected Behavior
Deployed a flask application which is reachable over multiple domains and ports:
- external via load balancer: `client - Host: example.org -> LB -> flask app`
- internal via DNS service discovery without load balancer: `client - Host: instance-1231.example.org -> flask app`
If the client connects directly (`Host: instance-1231.example.org`), the app should still return absolute and stable URLs like `http://example.org/path/to/my/view`, as the URL with the internal DNS name (`http://instance-1231.example.org/path/to/my/view`) is ephemeral.
Therefore I configured the `SERVER_NAME` config key and `url_for` generates the intended absolute URL by using `_external=True` within and without request context. But the app should be still able to route requests coming with `Host: instance-1231.example.org`.
### Actual Behavior
Flasks creates the `werkzeug.routing.MapAdapter` with `server_name=app.config['SERVER_NAME']` and therefore no view method will match to incoming requests with `Host: instance-1231.example.org`.
### Environment
* Python version: 2.7.13 (I'm sorry)
* Flask version: 1.0.2
* Werkzeug version: 0.14.1
### Applied workaround:
Overwrite `Flask.create_url_adapter` and create `MapAdapter` for request context without `server_name` parameter. Routing and URL generation works fine.
| null | https://github.com/pallets/flask/pull/5634 | null | {'base_commit': '07c7d5730a2685ef2281cc635e289685e5c3d478', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}}}, {'path': 'docs/config.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [270], 'mod': [263, 264, 266, 267]}}}, {'path': 'src/flask/app.py', 'status': 'modified', 'Loc': {"('Flask', 'create_url_adapter', 423)": {'add': [436], 'mod': [428, 430, 431, 432, 439, 440, 441, 442, 443, 444, 445, 448, 449, 450, 452, 453]}}}, {'path': 'tests/test_basic.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6, 1485]}}}, {'path': 'tests/test_blueprints.py', 'status': 'modified', 'Loc': {"(None, 'test_nesting_subdomains', 953)": {'add': [970], 'mod': [954, 963, 965, 967, 968, 969]}, "(None, 'test_child_and_parent_subdomain', 974)": {'add': [994], 'mod': [975, 976, 978, 985, 987, 989, 990, 991, 992, 993, 997]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"src/flask/app.py"
],
"doc": [
"docs/config.rst",
"CHANGES.rst"
],
"test": [
"tests/test_blueprints.py",
"tests/test_basic.py"
],
"config": [],
"asset": []
} | 1 | |
ansible | ansible | 0ffacedb3e41ec49df3606c0df1a1f0688868c32 | https://github.com/ansible/ansible/issues/20199 | affects_2.2
module
bug | Failure while using htpasswd module | _From @apolatynski on December 4, 2016 15:42_
<!--- Verify first that your issue/request is not already reported in GitHub -->
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
htpasswd
##### ANSIBLE VERSION
```
ansible 2.2.0.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
Default
##### OS / ENVIRONMENT
ArchLinux
##### SUMMARY
htpasswd module fails with message: `invalid version number '1.7.0.post20161124160753`
Looks like it's related to `python2-passlib` package (installed from archlinux repository).
##### STEPS TO REPRODUCE
Using a role with a task like below
```
htpasswd:
path=/etc/app/auth/htpasswd
name=someuser
crypt_scheme=bcrypt
password={{ password }}
owner=root
mode=0640
```
##### EXPECTED RESULTS
User entry added to htpasswd file.
##### ACTUAL RESULTS
Task failure.
<!--- Paste verbatim command output between quotes below -->
```
fatal: [host]: FAILED! => {
"changed": false,
"failed": true,
"invocation": {
"module_args": {
"backup": null,
"content": null,
"create": true,
"crypt_scheme": "bcrypt",
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": null,
"group": null,
"mode": "0640",
"name": "someuser",
"owner": "root",
"password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"path": "/etc/app/auth/htpasswd",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"unsafe_writes": null
},
"module_name": "htpasswd"
},
"msg": "invalid version number '1.7.0.post20161124160753'"
}
```
_Copied from original issue: ansible/ansible-modules-core#5816_ | null | https://github.com/ansible/ansible/pull/20202 | null | {'base_commit': '0ffacedb3e41ec49df3606c0df1a1f0688868c32', 'files': [{'path': 'lib/ansible/modules/web_infrastructure/htpasswd.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [106]}, "(None, 'present', 126)": {'mod': [140, 151]}, "(None, 'absent', 174)": {'mod': [178]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/web_infrastructure/htpasswd.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4 | https://github.com/yt-dlp/yt-dlp/issues/2237 | site-enhancement | [YouTube] Add the Channel Banner link to the info.json when downloading a channel's videos | ### Checklist
- [X] I'm reporting a site feature request
- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
https://www.youtube.com/c/jschlattLIVE
https://yt3.ggpht.com/DEcH0YOk5KknRHoC-QerpZVFUsldfTTM0ZarVr55rarrTbywYBBCKru61973B3l2t2g0hqV9jg=w2120-fcrop64=1,00000000ffffffff-k-c0xffffffff-no-nd-rj
### Description
When using a YouTube channel as the link and `--write-info-json` is used, it should fetch the link for the channel banner.
The manual method to downloading a channel's banner is to right click View Page Source on the banner, search "tvbanner", and find the link for the banner. If yt-dlp automated this process (in the same way it does the profile picture), that would be a great feature!
### Verbose log
```shell
C:\Users\Ben\Videos\test>youtube-dl --flat-playlist --write-info-json --verbose https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw
[debug] Command-line config: ['--flat-playlist', '--write-info-json', '--verbose', 'https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw']
[debug] Encodings: locale cp1252, fs utf-8, out utf-8, err utf-8, pref cp1252
[debug] yt-dlp version 2021.12.27 [6223f67] (win_exe)
[debug] Python version 3.8.10 (CPython 64bit) - Windows-10-10.0.17763-SP0
[debug] exe versions: ffmpeg git-2020-03-15-c467328, ffprobe git-2020-03-15-c467328
[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets
[debug] Proxy map: {}
[debug] [youtube:tab] Extracting URL: https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw
[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw: Downloading webpage
WARNING: [youtube:tab] A channel/user page was given. All the channel's videos will be downloaded. To download only the videos in the home page, add a "/featured" to the URL
[debug] [youtube:tab] Final URL: https://www.youtube.com/channel/UCWZp4y1jqBuvLtiyxSs_ZBw/videos
[download] Downloading playlist: Big guy - Videos
[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 1: Downloading API JSON
[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 2: Downloading API JSON
[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 3: Downloading API JSON
[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 4: Downloading API JSON
[youtube:tab] UCWZp4y1jqBuvLtiyxSs_ZBw page 5: Downloading API JSON
[info] Writing playlist metadata as JSON to: Big guy - Videos [UCWZp4y1jqBuvLtiyxSs_ZBw].info.json
[youtube:tab] playlist Big guy - Videos: Downloading 154 videos
[download] Downloading video 1 of 154
[download] Downloading video 2 of 154
[download] Downloading video 3 of 154
[download] Downloading video 4 of 154
[download] Downloading video 5 of 154
[download] Downloading video 6 of 154
[download] Downloading video 7 of 154
[download] Downloading video 8 of 154
[download] Downloading video 9 of 154
[download] Downloading video 10 of 154
[download] Downloading video 11 of 154
[download] Downloading video 12 of 154
[download] Downloading video 13 of 154
[download] Downloading video 14 of 154
[download] Downloading video 15 of 154
[download] Downloading video 16 of 154
[download] Downloading video 17 of 154
[download] Downloading video 18 of 154
[download] Downloading video 19 of 154
[download] Downloading video 20 of 154
[download] Downloading video 21 of 154
[download] Downloading video 22 of 154
[download] Downloading video 23 of 154
[download] Downloading video 24 of 154
[download] Downloading video 25 of 154
[download] Downloading video 26 of 154
[download] Downloading video 27 of 154
[download] Downloading video 28 of 154
[download] Downloading video 29 of 154
[download] Downloading video 30 of 154
[download] Downloading video 31 of 154
[download] Downloading video 32 of 154
[download] Downloading video 33 of 154
[download] Downloading video 34 of 154
[download] Downloading video 35 of 154
[download] Downloading video 36 of 154
[download] Downloading video 37 of 154
[download] Downloading video 38 of 154
[download] Downloading video 39 of 154
[download] Downloading video 40 of 154
[download] Downloading video 41 of 154
[download] Downloading video 42 of 154
[download] Downloading video 43 of 154
[download] Downloading video 44 of 154
[download] Downloading video 45 of 154
[download] Downloading video 46 of 154
[download] Downloading video 47 of 154
[download] Downloading video 48 of 154
[download] Downloading video 49 of 154
[download] Downloading video 50 of 154
[download] Downloading video 51 of 154
[download] Downloading video 52 of 154
[download] Downloading video 53 of 154
[download] Downloading video 54 of 154
[download] Downloading video 55 of 154
[download] Downloading video 56 of 154
[download] Downloading video 57 of 154
[download] Downloading video 58 of 154
[download] Downloading video 59 of 154
[download] Downloading video 60 of 154
[download] Downloading video 61 of 154
[download] Downloading video 62 of 154
[download] Downloading video 63 of 154
[download] Downloading video 64 of 154
[download] Downloading video 65 of 154
[download] Downloading video 66 of 154
[download] Downloading video 67 of 154
[download] Downloading video 68 of 154
[download] Downloading video 69 of 154
[download] Downloading video 70 of 154
[download] Downloading video 71 of 154
[download] Downloading video 72 of 154
[download] Downloading video 73 of 154
[download] Downloading video 74 of 154
[download] Downloading video 75 of 154
[download] Downloading video 76 of 154
[download] Downloading video 77 of 154
[download] Downloading video 78 of 154
[download] Downloading video 79 of 154
[download] Downloading video 80 of 154
[download] Downloading video 81 of 154
[download] Downloading video 82 of 154
[download] Downloading video 83 of 154
[download] Downloading video 84 of 154
[download] Downloading video 85 of 154
[download] Downloading video 86 of 154
[download] Downloading video 87 of 154
[download] Downloading video 88 of 154
[download] Downloading video 89 of 154
[download] Downloading video 90 of 154
[download] Downloading video 91 of 154
[download] Downloading video 92 of 154
[download] Downloading video 93 of 154
[download] Downloading video 94 of 154
[download] Downloading video 95 of 154
[download] Downloading video 96 of 154
[download] Downloading video 97 of 154
[download] Downloading video 98 of 154
[download] Downloading video 99 of 154
[download] Downloading video 100 of 154
[download] Downloading video 101 of 154
[download] Downloading video 102 of 154
[download] Downloading video 103 of 154
[download] Downloading video 104 of 154
[download] Downloading video 105 of 154
[download] Downloading video 106 of 154
[download] Downloading video 107 of 154
[download] Downloading video 108 of 154
[download] Downloading video 109 of 154
[download] Downloading video 110 of 154
[download] Downloading video 111 of 154
[download] Downloading video 112 of 154
[download] Downloading video 113 of 154
[download] Downloading video 114 of 154
[download] Downloading video 115 of 154
[download] Downloading video 116 of 154
[download] Downloading video 117 of 154
[download] Downloading video 118 of 154
[download] Downloading video 119 of 154
[download] Downloading video 120 of 154
[download] Downloading video 121 of 154
[download] Downloading video 122 of 154
[download] Downloading video 123 of 154
[download] Downloading video 124 of 154
[download] Downloading video 125 of 154
[download] Downloading video 126 of 154
[download] Downloading video 127 of 154
[download] Downloading video 128 of 154
[download] Downloading video 129 of 154
[download] Downloading video 130 of 154
[download] Downloading video 131 of 154
[download] Downloading video 132 of 154
[download] Downloading video 133 of 154
[download] Downloading video 134 of 154
[download] Downloading video 135 of 154
[download] Downloading video 136 of 154
[download] Downloading video 137 of 154
[download] Downloading video 138 of 154
[download] Downloading video 139 of 154
[download] Downloading video 140 of 154
[download] Downloading video 141 of 154
[download] Downloading video 142 of 154
[download] Downloading video 143 of 154
[download] Downloading video 144 of 154
[download] Downloading video 145 of 154
[download] Downloading video 146 of 154
[download] Downloading video 147 of 154
[download] Downloading video 148 of 154
[download] Downloading video 149 of 154
[download] Downloading video 150 of 154
[download] Downloading video 151 of 154
[download] Downloading video 152 of 154
[download] Downloading video 153 of 154
[download] Downloading video 154 of 154
[info] Writing updated playlist metadata as JSON to: Big guy - Videos [UCWZp4y1jqBuvLtiyxSs_ZBw].info.json
[download] Finished downloading playlist: Big guy - Videos
```
| null | https://github.com/yt-dlp/yt-dlp/pull/2400 | null | {'base_commit': '135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4', 'files': [{'path': 'yt_dlp/extractor/youtube.py', 'status': 'modified', 'Loc': {"('YoutubeTabBaseInfoExtractor', '_extract_from_tabs', 3894)": {'mod': [3916, 3917, 3918, 3919, 3938]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/youtube.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | a8968bfa696d51f73769c54f2630a9530488236a | https://github.com/pandas-dev/pandas/issues/46804 | Docs | DOC: building page for nested methods doesn't work | The following
```
python make.py --single pandas.Series.str.rsplit
```
fails to produce the docs:
```
(pandas-dev) marcogorelli@OVMG025 doc % python make.py clean && python make.py --single pandas.Series.str.rsplit
Running Sphinx v4.4.0
loading translations [en]... done
making output directory... done
[autosummary] generating autosummary for: index.rst
[autosummary] generating autosummary for: /Users/marcogorelli/pandas-dev/doc/source/reference/api/pandas.Series.str.rsplit.rst
building [mo]: targets for 0 po files that are out of date
building [html]: targets for 1 source files that are out of date
updating environment: [new config] 2 added, 0 changed, 0 removed
reading sources... [100%] reference/api/pandas.Series.str.rsplit
WARNING: autodoc: failed to import method 'str.rsplit' from module 'Series'; the following exception was raised:
No module named 'Series'
looking for now-outdated files... none found
pickling environment... done
checking consistency... done
preparing documents... done
/Users/marcogorelli/pandas-dev/doc/source/index.rst:44: WARNING: 'any' reference target not found: getting_started
/Users/marcogorelli/pandas-dev/doc/source/index.rst:60: WARNING: 'any' reference target not found: user_guide
/Users/marcogorelli/pandas-dev/doc/source/index.rst:77: WARNING: 'any' reference target not found: api
/Users/marcogorelli/pandas-dev/doc/source/index.rst:94: WARNING: 'any' reference target not found: development
writing output... [100%] reference/api/pandas.Series.str.rsplit
waiting for workers...
generating indices... genindex py-modindex done
writing additional pages... search done
copying images... [100%] _static/index_contribute.svg
copying static files... done
copying extra files... done
dumping search index in English (code: en)... done
dumping object inventory... done
build succeeded, 5 warnings.
```
However, it works just fine to do
```
python make.py --single pandas.Series.value_counts
```
I haven't figured out how to address this, so opening an issue for now | null | https://github.com/pandas-dev/pandas/pull/46806 | null | {'base_commit': 'a8968bfa696d51f73769c54f2630a9530488236a', 'files': [{'path': '.github/workflows/code-checks.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82]}}}, {'path': '.github/workflows/docbuild-and-upload.yml', 'status': 'modified', 'Loc': {}}, {'path': 'ci/code_checks.sh', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 104], 'mod': [16]}}}, {'path': 'doc/source/index.rst.template', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28, 99, 105, 108]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
".github/workflows/docbuild-and-upload.yml",
"doc/source/index.rst.template"
],
"test": [],
"config": [
".github/workflows/code-checks.yml"
],
"asset": [
"ci/code_checks.sh"
]
} | 1 |
pandas-dev | pandas | e88c39225ef545123860c679822f1b567fe65c27 | https://github.com/pandas-dev/pandas/issues/33428 | Docs
good first issue | DOC: Data links in Pandas API Reference are broken 404 | #### Location of the documentation
https://pandas.pydata.org/docs/reference/api/pandas.plotting.parallel_coordinates.html
...probably many examples in other sections
#### Documentation problem
Results in 404 not found error
df = pd.read_csv('https://raw.github.com/pandas-dev/pandas/master'
'/pandas/tests/data/csv/iris.csv')
#### Suggested fix for documentation
The GitHub site should be "raw.githubusercontent.com"
| null | https://github.com/pandas-dev/pandas/pull/33099 | null | {'base_commit': 'e88c39225ef545123860c679822f1b567fe65c27', 'files': [{'path': 'pandas/plotting/_misc.py', 'status': 'modified', 'Loc': {"(None, 'parallel_coordinates', 311)": {'mod': [362]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/plotting/_misc.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | c1bed601e9b9a3f5fa8fb529cfa40df7a3a0b903 | https://github.com/ultralytics/yolov5/issues/4970 | question | Cannot load the model | I get an error when I run this code torch.hub.load('ultralytics/yolov5', 'custom', path='yolov5/path/last.pt', force_reload=True)
It was working until yesterday and now I receive an error "raise ValueError("{!r} does not start with {!r}"
ValueError: 'C:\\Users\\aaa\\.cache\\torch\\hub\\ultralytics_yolov5_master' does not start with 'C:\\Users\\aaa\\PycharmProjects\\project\\proejct1'". I have removed the files inside the cache folder but it doesn't fix the error...
Any suggestions will be appreciated. Thank you | null | https://github.com/ultralytics/yolov5/pull/4974 | null | {'base_commit': 'c1bed601e9b9a3f5fa8fb529cfa40df7a3a0b903', 'files': [{'path': 'models/tf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23]}}}, {'path': 'models/yolo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [18]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"models/tf.py",
"models/yolo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 674fb96b33c07c680844f674fcdf0767b6e3c2f9 | https://github.com/pandas-dev/pandas/issues/17200 | IO Data
IO JSON | read_json(lines=True) broken for s3 urls in Python 3 (v0.20.3) | #### Code Sample, a copy-pastable example if possible
Using Python
```python
import pandas as pd
inputdf = pd.read_json(path_or_buf="s3://path/to/python-lines/file.json", lines=True)
```
The file is similar to:
```
{"url": "blah", "other": "blah"}
{"url": "blah", "other": "blah"}
{"url": "blah", "other": "blah"}
```
#### Problem description
When attempting to read a python lines file into a DataFrame using the s3 protocol, the above code will error with:
```
2017-08-08 11:06:14,225 - image_rank_csv - ERROR - initial_value must be str or None, not bytes
Traceback (most recent call last):
File "image_rank_csv.py", line 62, in run
inputdf = pd.read_json(path_or_buf="s3://path/to/python-lines/file.json", lines=True)
File "...env/lib/python3.6/site-packages/pandas/io/json/json.py", line 347, in read_json
lines = list(StringIO(json.strip()))
TypeError: initial_value must be str or None, not bytes
```
This works fine if the file is local, e.g.:
```python
import pandas as pd
inputdf = pd.read_json(path_or_buf="/local/path/to/python-lines/file.json", lines=True)
```
#### Expected Output
Expect to successfully read the file and error above not to occur.
My current thinking is that when we get the file handle: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L333 , you delegate to `s3fs`, which documents that [it only operates in Binary mode](http://s3fs.readthedocs.io/en/latest/#limitations). Therefore when you `read()`: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L335 , you get bytes, so passing to `StringIO` will fail here: https://github.com/pandas-dev/pandas/blob/v0.20.3/pandas/io/json/json.py#L347 . Maybe it needs a different handler for `BytesIO`?
#### Output of ``pd.show_versions()``
<details>
[paste the output of ``pd.show_versions()`` here below this line]
```
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.1.final.0
python-bits: 64
OS: Darwin
OS-release: 16.6.0
machine: x86_64
processor: i386
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.20.3
pytest: None
pip: 9.0.1
setuptools: 36.2.7
Cython: None
numpy: 1.12.0
scipy: 0.19.1
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.6.0
pytz: 2017.2
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: 2.6.2 (dt dec pq3 ext lo64)
jinja2: None
s3fs: 0.1.2
pandas_gbq: None
pandas_datareader: None
```
</details> | null | https://github.com/pandas-dev/pandas/pull/17201 | null | {'base_commit': '674fb96b33c07c680844f674fcdf0767b6e3c2f9', 'files': [{'path': 'doc/source/whatsnew/v0.21.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [91]}}}, {'path': 'pandas/io/json/json.py', 'status': 'modified', 'Loc': {"('JsonReader', 'read', 456)": {'add': [460], 'mod': [462]}, '(None, None, None)': {'mod': [8]}, "('Parser', '_try_convert_data', 595)": {'mod': [615, 631, 642, 654, 664]}, "('Parser', '_try_convert_to_date', 669)": {'mod': [683, 700]}}}, {'path': 'pandas/tests/io/json/test_pandas.py', 'status': 'modified', 'Loc': {"('TestPandasContainer', None, 38)": {'add': [1034]}}}, {'path': 'pandas/tests/io/parser/test_network.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7, 10, 18, 19, 20, 23, 24, 25, 26, 29, 30, 31, 32, 34, 35, 36, 37, 38, 40, 41, 42, 43, 44, 45, 47, 48, 49, 51, 52, 53, 55, 56, 58, 60]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/io/json/json.py"
],
"doc": [
"doc/source/whatsnew/v0.21.1.txt"
],
"test": [
"pandas/tests/io/parser/test_network.py",
"pandas/tests/io/json/test_pandas.py"
],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 1ddf398a81d23772fc9ac231a4e774af932f8360 | https://github.com/All-Hands-AI/OpenHands/issues/3031 | bug
enhancement
severity:medium
tracked | [Runtime] Mega-issue to track all issues related to bash Interactive terminal | This is a mega-issue tracker for the **Interactive terminal** issues people run into.
- [ ] https://github.com/OpenDevin/OpenDevin/issues/2754
- [ ] https://github.com/OpenDevin/OpenDevin/issues/3008
- [ ] https://github.com/OpenDevin/OpenDevin/issues/2799
- [ ] https://github.com/OpenDevin/OpenDevin/issues/892
- [ ] https://github.com/OpenDevin/OpenDevin/issues/3030
- [ ] https://github.com/OpenDevin/OpenDevin/issues/3176
Feel free to expand this list if I missed any relevant issue!
---
# Cause
These are typically caused by the same underlying reason: OpenDevin uses [`pexpect`](https://pexpect.readthedocs.io/en/stable/overview.html) to interact with Bash shells; however, the current parsing logic only looks for the *next* `PS1` prompt (e.g., something like `root@hostname:/folderABC $`).
It will keep looking for such a pattern until it times out, causing the following things to break, as listed in the issues above:
- Open a new interactive program (e.g., `python3`), where the new prompt changes to `>>>`
- Open a new text editor (e.g., `nano`, `vim`), where the display could be completely broken (I'm not familiar with the protocol here, though)
- Enter a new conda virtual environment: conda will prepend the env name (e.g., `(base)`) before the `PS1` prompt, causing the current `pexpect` parsing to break
- When the agent is asked for password (e.g., with patterns like `Password:`)
- Prompt like `(yes/no/[fingerprint])` requesting user confirmation.
# Fixes
We plan to resolve these as much as we can once the arch refactor https://github.com/OpenDevin/OpenDevin/issues/2404 is completed. Below is a non-exhaustive list of the prompt patterns we are trying to handle with `pexpect`; we cannot list everything here:
1. [ ] Try to cover common use cases of these prompts (e.g., `[yes/no]` pattern, conda environment pattern)
2. [ ] Figure out a more general way (rather than writing rules) for agents to interact with these (e.g., we don't write every rule explicitly, but, for example, if we've been waiting for more than 5s and there's no new output from the terminal, it probably means it is waiting for user input and we should hand it over to the agent - subsequently, we may need to allow the agent to issue special keyboard actions like `ctrl+D`, `ctrl+C`, etc.)
3. [ ] Add something in the prompt that forbids the agent from entering interactive programs (e.g., interactive Python, vim, nano, etc.)
4. [ ] We need a way to detect if the agent accidentally goes into such an interactive program, and we need a way to force it out (we currently send `ctrl+C`, which might not work for a large variety of programs like `vim`).
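As an illustration of point 1, prompt detection is essentially ordered regex matching on the tail of the output buffer. The patterns below are hypothetical examples, not OpenDevin's actual rules:

```python
import re

# Ordered: more specific prompts (conda-prefixed) before generic ones.
PROMPT_PATTERNS = [
    r"\(\w+\) \w+@[\w.-]+:[^\n$#]*[$#] ?$",  # (base) user@host:/dir $
    r"\w+@[\w.-]+:[^\n$#]*[$#] ?$",          # user@host:/dir $
    r">>> ?$",                                # interactive Python REPL
    r"[Pp]assword: ?$",                       # password request
    r"\(yes/no(/\[fingerprint\])?\)\?? ?$",   # ssh confirmation prompt
]

def detect_prompt(output: str):
    """Return the index of the first pattern matching the end of output."""
    for i, pattern in enumerate(PROMPT_PATTERNS):
        if re.search(pattern, output):
            return i
    return None  # no known prompt: maybe still running, maybe stuck

print(detect_prompt("root@sandbox:/workspace $"))         # 1
print(detect_prompt("(base) root@sandbox:/workspace $"))  # 0
print(detect_prompt(">>> "))                              # 2
```

A real implementation would also need the timeout-based fallback described in point 2, since no finite pattern list covers every program.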
# If you want to help!
Try to take a look at our existing bash parsing logic for the new architecture (under development!):
https://github.com/OpenDevin/OpenDevin/blob/8bfa61f3e4beceb690562b4d105aa01dc50d58d7/opendevin/runtime/client/client.py#L62-L111
You can help to:
1. Write test cases into `https://github.com/OpenDevin/OpenDevin/blob/main/tests/unit/test_runtime.py` to expose these interactive bash issues
2. Try to fix them inside the `client/client.py` (and/or the `ssh_box.py` - but we plan to deprecate them soon, so only supporting these on `EventStreamRuntime` should be sufficient!) | null | https://github.com/All-Hands-AI/OpenHands/pull/4881 | null | {'base_commit': '1ddf398a81d23772fc9ac231a4e774af932f8360', 'files': [{'path': '.github/workflows/dummy-agent-test.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}, {'path': '.github/workflows/eval-runner.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [31]}}}, {'path': '.github/workflows/py-unit-tests-mac.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33]}}}, {'path': '.github/workflows/py-unit-tests.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32]}}}, {'path': 'docs/static/img/backend_architecture.puml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [126]}}}, {'path': 'evaluation/benchmarks/agent_bench/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'complete_runtime', 111)": {'mod': [140, 167, 168]}}}, {'path': 'evaluation/benchmarks/aider_bench/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'complete_runtime', 123)": {'mod': [148, 149, 150, 151]}}}, {'path': 'evaluation/benchmarks/biocoder/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'complete_runtime', 168)": {'mod': [202, 226, 227, 228]}}}, {'path': 'evaluation/benchmarks/bird/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [130]}}}, {'path': 'evaluation/benchmarks/bird/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'initialize_runtime', 249)": {'mod': [271, 272, 273, 274]}, "(None, 'complete_runtime', 283)": {'mod': [303, 304, 305, 306]}}}, {'path': 'evaluation/benchmarks/humanevalfix/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74, 101, 128]}}}, {'path': 'evaluation/benchmarks/humanevalfix/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'complete_runtime', 151)": {'mod': [174, 
175, 176]}}}, {'path': 'evaluation/benchmarks/ml_bench/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'complete_runtime', 145)": {'mod': [166]}}}, {'path': 'evaluation/benchmarks/scienceagentbench/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'initialize_runtime', 91)": {'mod': [124, 125, 126, 127]}, "(None, 'complete_runtime', 136)": {'mod': [157, 158, 159, 160]}}}, {'path': 'evaluation/benchmarks/swe_bench/eval_infer.py', 'status': 'modified', 'Loc': {"(None, 'process_instance', 96)": {'add': [100, 148], 'mod': [180, 203, 204, 205, 227, 245]}}}, {'path': 'evaluation/benchmarks/swe_bench/run_infer.py', 'status': 'modified', 'Loc': {"(None, 'initialize_runtime', 156)": {'add': [284]}, '(None, None, None)': {'add': [537]}, "(None, 'complete_runtime', 290)": {'mod': [340, 341]}, "(None, 'process_instance', 369)": {'mod': [388]}}}, {'path': 'evaluation/benchmarks/swe_bench/scripts/eval/compare_outputs.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [109], 'mod': [107]}}}, {'path': 'evaluation/benchmarks/swe_bench/scripts/eval/convert_oh_output_to_md.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22, 86], 'mod': [88]}, "(None, 'write_row_to_md_file', 53)": {'mod': [53, 61, 62]}}}, {'path': 'evaluation/benchmarks/swe_bench/scripts/eval/update_output_with_eval.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [113]}}}, {'path': 'evaluation/integration_tests/tests/t01_fix_simple_typo.py', 'status': 'modified', 'Loc': {"('Test', 'verify_result', 25)": {'mod': [27]}}}, {'path': 'evaluation/integration_tests/tests/t02_add_bash_hello.py', 'status': 'modified', 'Loc': {"('Test', 'initialize_runtime', 12)": {'mod': [13]}, "('Test', 'verify_result', 18)": {'mod': [20, 29]}}}, {'path': 'evaluation/integration_tests/tests/t03_jupyter_write_file.py', 'status': 'modified', 'Loc': {"('Test', 'initialize_runtime', 12)": {'mod': [13]}, "('Test', 'verify_result', 18)": {'mod': [20, 29]}}}, {'path': 
'evaluation/integration_tests/tests/t04_git_staging.py', 'status': 'modified', 'Loc': {"('Test', 'initialize_runtime', 12)": {'mod': [13, 18, 23, 24, 25, 30]}, "('Test', 'verify_result', 35)": {'mod': [37, 46]}}}, {'path': 'evaluation/integration_tests/tests/t05_simple_browsing.py', 'status': 'modified', 'Loc': {"('Test', 'initialize_runtime', 85)": {'mod': [86, 90, 104, 105]}}}, {'path': 'frontend/src/services/actions.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, "(None, 'handleActionMessage', 60)": {'add': [64]}}}, {'path': 'frontend/src/services/observations.ts', 'status': 'modified', 'Loc': {"(None, 'handleObservationMessage', 14)": {'mod': [83, 84]}}}, {'path': 'frontend/src/state/chat-slice.ts', 'status': 'modified', 'Loc': {"(None, 'addAssistantAction', 88)": {'mod': [96]}, "(None, 'addAssistantObservation', 127)": {'mod': [147, 161]}}}, {'path': 'frontend/src/types/core/observations.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18], 'mod': [16, 17]}}}, {'path': 'frontend/src/types/message.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [30, 31]}}}, {'path': 'openhands/agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {"('CodeActAgent', 'get_observation_message', 238)": {'mod': [280]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"openhands/agenthub/codeact_agent/codeact_agent.py",
"evaluation/benchmarks/agent_bench/run_infer.py",
"evaluation/benchmarks/bird/run_infer.py",
"frontend/src/services/observations.ts",
"evaluation/benchmarks/humanevalfix/run_infer.py",
"evaluation/benchmarks/scienceagentbench/run_infer.py",
"evaluation/benchmarks/swe_bench/scripts/eval/update_output_with_eval.py",
"evaluation/integration_tests/tests/t03_jupyter_write_file.py",
"evaluation/benchmarks/swe_bench/run_infer.py",
"evaluation/integration_tests/tests/t01_fix_simple_typo.py",
"frontend/src/state/chat-slice.ts",
"frontend/src/types/message.tsx",
"evaluation/benchmarks/swe_bench/scripts/eval/compare_outputs.py",
"evaluation/integration_tests/tests/t05_simple_browsing.py",
"evaluation/benchmarks/ml_bench/run_infer.py",
"evaluation/integration_tests/tests/t02_add_bash_hello.py",
"evaluation/benchmarks/biocoder/run_infer.py",
"frontend/src/types/core/observations.ts",
"evaluation/integration_tests/tests/t04_git_staging.py",
"evaluation/benchmarks/swe_bench/eval_infer.py",
"evaluation/benchmarks/aider_bench/run_infer.py",
"frontend/src/services/actions.ts",
"evaluation/benchmarks/swe_bench/scripts/eval/convert_oh_output_to_md.py"
],
"doc": [
"docs/static/img/backend_architecture.puml",
"evaluation/benchmarks/humanevalfix/README.md",
"evaluation/benchmarks/bird/README.md"
],
"test": [],
"config": [
".github/workflows/dummy-agent-test.yml",
".github/workflows/py-unit-tests-mac.yml",
".github/workflows/py-unit-tests.yml",
".github/workflows/eval-runner.yml"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 23a7057be29ed7de44b5705d5bb4c4d0bbdea089 | https://github.com/All-Hands-AI/OpenHands/issues/813 | bug | error seed': 42 | Hi! I'm OpenDevin, an AI Software Engineer. What would you like to build with me today?
user avatar
Design and run a website for a financial consulting firm for me. It should be a detailed and comprehensive piece of work.
assistant avatar
Starting new task...
assistant avatar
Oops. Something went wrong: gemini does not support parameters: {'seed': 42}. To drop these, set `litellm.drop_params=True` or for proxy: `litellm_settings: drop_params: true`
assistant avatar
Oops. Something went wrong: Expecting CmdRunAction or AgentEchoAction for Action | null | https://github.com/All-Hands-AI/OpenHands/pull/830 | null | {'base_commit': '23a7057be29ed7de44b5705d5bb4c4d0bbdea089', 'files': [{'path': 'agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {"('CodeActAgent', 'step', 83)": {'mod': [126, 127]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"agenthub/codeact_agent/codeact_agent.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
keras-team | keras | 818c9fadd9cb1748f2b5545e8ef5f141526ec14e | https://github.com/keras-team/keras/issues/19281 | type:feature | Scatter update variable in TF optimizer | In TensorFlow there is a cool (fast) variable update operation - scatter_update (like "assign" for dense variables).
It would be cool if you overrode the assign operation for such cases (I think it should look like https://github.com/keras-team/keras/blob/master/keras/backend/tensorflow/optimizer.py#L45).
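For readers unfamiliar with the op: a sparse scatter update overwrites only selected rows of a variable instead of assigning a whole dense tensor. A pure-Python sketch of the semantics (not Keras/TF code):

```python
def scatter_update(variable, indices, updates):
    """Overwrite only the rows named by `indices`, like tf.Variable.scatter_update."""
    for i, row in zip(indices, updates):
        variable[i] = row
    return variable

var = [[0.0, 0.0] for _ in range(4)]
scatter_update(var, [0, 3], [[1.0, 1.0], [2.0, 2.0]])
print(var)  # [[1.0, 1.0], [0.0, 0.0], [0.0, 0.0], [2.0, 2.0]]
```

The performance win comes from touching only the updated slices, which matters for large embedding-style variables.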
P.S.
Found such case during migration of Keras v2 custom optimizer. | null | https://github.com/keras-team/keras/pull/19313 | null | {'base_commit': '818c9fadd9cb1748f2b5545e8ef5f141526ec14e', 'files': [{'path': 'keras/backend/tensorflow/optimizer.py', 'status': 'modified', 'Loc': {"('TFOptimizer', None, 8)": {'add': [44]}}}, {'path': 'keras/optimizers/optimizer_sparse_test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 99]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"keras/backend/tensorflow/optimizer.py",
"keras/optimizers/optimizer_sparse_test.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | d558bce8e9d5d4adfb0ab587be20b8a231dd1eea | https://github.com/pandas-dev/pandas/issues/39636 | Regression
Apply | BUG: ValueError on ".transform" method applied to an empty DataFrame | - [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the latest version of pandas.
- [ ] (optional) I have confirmed this bug exists on the master branch of pandas.
---
#### Code Sample, a copy-pastable example
Output on version 1.1.5:
```python
In [5]: import pandas as pd
...: df = pd.DataFrame([], columns=["id", "field"])
...: df["id"].transform(lambda x: x + 10)
Out[5]: Series([], Name: id, dtype: object)
```
Output on version 1.2.x:
```python
In [4]: import pandas as pd
...: df = pd.DataFrame([], columns=["id", "field"])
...: df["id"].transform(lambda x: x + 10)
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-3-d1e6cad57091> in <module>
----> 1 df["id"].transform(lambda x: x + 10)
~/.pyenv/versions/3.9.1/envs/odds-data-3.9.1/lib/python3.9/site-packages/pandas/core/series.py in transform(self, func, axis, *args, **kwargs)
3975 self, func: AggFuncType, axis: Axis = 0, *args, **kwargs
3976 ) -> FrameOrSeriesUnion:
-> 3977 return transform(self, func, axis, *args, **kwargs)
3978
3979 def apply(self, func, convert_dtype=True, args=(), **kwds):
~/.pyenv/versions/3.9.1/envs/odds-data-3.9.1/lib/python3.9/site-packages/pandas/core/aggregation.py in transform(obj, func, axis, *args, **kwargs)
458 # when the dtype is not appropriate
459 if isinstance(result, (ABCSeries, ABCDataFrame)) and result.empty:
--> 460 raise ValueError("Transform function failed")
461 if not isinstance(result, (ABCSeries, ABCDataFrame)) or not result.index.equals(
462 obj.index
ValueError: Transform function failed
```
#### Problem description
Applying `.transform` to an empty DataFrame raises a `ValueError` on version 1.2.x. This is a change from the behavior of version 1.1.5, which returned the same empty object (as `.apply` still does).
The change that added this error apparently is related to this commit https://github.com/pandas-dev/pandas/pull/35964/commits/7b6ab94720024d6696b19867f5f8f59f79587ff0
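The check that regressed can be sketched in isolation; this is a simplified, hypothetical stand-in for the validation in `pandas/core/aggregation.py`, not the actual code:

```python
def validate_transform_result(input_len: int, result_len: int) -> None:
    """Raise only when a non-empty input collapsed to an empty result."""
    if result_len == 0 and input_len > 0:
        raise ValueError("Transform function failed")

validate_transform_result(0, 0)  # empty in, empty out: fine (the 1.1.5 behavior)
try:
    validate_transform_result(5, 0)  # a non-empty input that truly failed
except ValueError as exc:
    print(exc)  # Transform function failed
```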
#### Expected Output
```python
In [5]: import pandas as pd
...: df = pd.DataFrame([], columns=["id", "field"])
...: df["id"].transform(lambda x: x + 10)
Out[5]: Series([], Name: id, dtype: object)
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : 9d598a5e1eee26df95b3910e3f2934890d062caa
python : 3.9.1.final.0
python-bits : 64
OS : Linux
OS-release : 5.4.0-65-generic
Version : #73-Ubuntu SMP Mon Jan 18 17:25:17 UTC 2021
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : en_US.UTF-8
LOCALE : en_US.UTF-8
pandas : 1.2.1
numpy : 1.20.0
pytz : 2021.1
dateutil : 2.8.1
pip : 20.2.3
setuptools : 49.2.1
Cython : None
pytest : 6.2.2
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.6.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : None
IPython : 7.20.0
pandas_datareader: None
bs4 : None
bottleneck : None
fsspec : None
fastparquet : None
gcsfs : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pyxlsb : None
s3fs : None
scipy : 1.6.0
sqlalchemy : 1.3.23
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
numba : None
</details>
| null | https://github.com/pandas-dev/pandas/pull/39639 | null | {'base_commit': 'd558bce8e9d5d4adfb0ab587be20b8a231dd1eea', 'files': [{'path': 'doc/source/whatsnew/v1.2.2.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23]}}}, {'path': 'pandas/core/aggregation.py', 'status': 'modified', 'Loc': {"(None, 'transform', 404)": {'mod': [460]}}}, {'path': 'pandas/tests/apply/test_frame_transform.py', 'status': 'modified', 'Loc': {"(None, 'test_transform_mixed_column_name_dtypes', 271)": {'add': [276]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"pandas/core/aggregation.py"
],
"doc": [
"doc/source/whatsnew/v1.2.2.rst"
],
"test": [
"pandas/tests/apply/test_frame_transform.py"
],
"config": [],
"asset": []
} | 1 |
fastapi | fastapi | 92c825be6a7362099400c9c3fe8b01ea13add3dc | https://github.com/fastapi/fastapi/issues/19 | question
answered
reviewed
question-migrate | accessing the request object | In Starlette you can access the request object in a function decorated with the route decorator.
It seems very handy to be able to access middlewares etc.
Is there a way in FastAPI to do that using the provided get/post/options... decorators?
Same question for the ApiRouter.
```
@app.route("/notes", methods=["GET"])
async def list_notes(request):
    query = notes.select()
    results = await request.database.fetchall(query)
```
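For the record, the idiom FastAPI settled on is annotation-based injection: declare a parameter annotated with the request type and the framework supplies it. A toy, framework-free sketch of that idea (not FastAPI's implementation):

```python
import inspect

class Request:
    """Stand-in for starlette.requests.Request."""
    def __init__(self, path: str):
        self.path = path

def route(func):
    """Inject a Request argument only when the handler annotates one."""
    params = inspect.signature(func).parameters.values()
    wants_request = any(p.annotation is Request for p in params)
    def call(request):
        return func(request) if wants_request else func()
    return call

@route
def list_notes(request: Request):
    return request.path

@route
def health():
    return "ok"

print(list_notes(Request("/notes")), health(Request("/health")))  # /notes ok
```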
| null | https://github.com/fastapi/fastapi/pull/25 | null | {'base_commit': '92c825be6a7362099400c9c3fe8b01ea13add3dc', 'files': [{'path': 'docs/tutorial/extra-starlette.md', 'status': 'removed', 'Loc': {}}, {'path': 'mkdocs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [56], 'mod': [61]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "5",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"mkdocs.yml",
"docs/tutorial/extra-starlette.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 9572a2e00ddadb9fc7e2125c3e723b8a3b54be05 | https://github.com/pandas-dev/pandas/issues/33238 | CI/COMPAT: Linux py37_np_dev pipeline timeouts | #### Problem description
Linux py37_np_dev pipeline appears to time out for everyone after 60 minutes.
There are a couple hundred thousand errors like this:
```
Exception ignored in: 'pandas.io.sas._sas.Parser.process_byte_array_with_data'
DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
DeprecationWarning: tostring() is deprecated. Use tobytes() instead.
```
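The fix is mechanical: NumPy deprecated `ndarray.tostring()` in favor of the identically-behaved `tobytes()`, so the Cython call sites just switch methods. The stdlib `array` module went through the same rename:

```python
from array import array

buf = array("B", [1, 2])
# Deprecated (and later removed) spelling: buf.tostring()
data = buf.tobytes()  # same bytes, no DeprecationWarning
print(data)  # b'\x01\x02'
```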
Here is a [link](https://dev.azure.com/pandas-dev/pandas/_build/results?buildId=32212&view=logs&j=3a03f79d-0b41-5610-1aa4-b4a014d0bc70&t=4d05ed0e-1ed3-5bff-dd63-1e957f2766a9&l=792078) to it failing for me. | null | https://github.com/pandas-dev/pandas/pull/33241 | null | {'base_commit': '9572a2e00ddadb9fc7e2125c3e723b8a3b54be05', 'files': [{'path': 'pandas/_libs/writers.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [115]}}}, {'path': 'pandas/io/sas/sas.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [434]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/io/sas/sas.pyx",
"pandas/_libs/writers.pyx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 2086ff4065a43fa40d909f81e62623e265df5759 | https://github.com/scrapy/scrapy/issues/2390 | bug | Sitemap spider not robust against wrong sitemap URLs in robots.txt | [The "specs"](http://www.sitemaps.org/protocol.html#submit_robots) do say that the URL should be a "full URL":
> You can specify the location of the Sitemap using a robots.txt file. To do this, simply add the following line including the full URL to the sitemap:
> `Sitemap: http://www.example.com/sitemap.xml`
But some robots.txt files use relative ones.
Example: http://www.asos.com/robots.txt
```
User-agent: *
Sitemap: /sitemap.ashx
Sitemap: http://www.asos.com/sitemap.xml
Disallow: /basket/
(...)
```
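The relative entry above can be made absolute by resolving it against the robots.txt URL itself; a sketch of that direction (not necessarily Scrapy's exact implementation):

```python
from urllib.parse import urljoin

robots_url = "http://www.asos.com/robots.txt"
robots_body = """\
User-agent: *
Sitemap: /sitemap.ashx
Sitemap: http://www.asos.com/sitemap.xml
"""

# urljoin leaves absolute URLs untouched and resolves relative ones.
sitemaps = [
    urljoin(robots_url, line.split(":", 1)[1].strip())
    for line in robots_body.splitlines()
    if line.lower().startswith("sitemap:")
]
print(sitemaps)
# ['http://www.asos.com/sitemap.ashx', 'http://www.asos.com/sitemap.xml']
```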
Spider:
```
from scrapy.spiders import SitemapSpider
class TestSpider(SitemapSpider):
    name = "test"
    sitemap_urls = [
        'http://www.asos.com/robots.txt',
    ]

    def parse(self, response):
        self.logger.info('parsing %r' % response.url)
```
Logs:
```
$ scrapy runspider spider.py
Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.90 Safari/537.36'
2016-11-09 17:46:19 [scrapy] INFO: Scrapy 1.2.1 started (bot: scrapybot)
(...)
2016-11-09 17:46:19 [scrapy] DEBUG: Crawled (200) <GET http://www.asos.com/robots.txt> (referer: None)
2016-11-09 17:46:19 [scrapy] ERROR: Spider error processing <GET http://www.asos.com/robots.txt> (referer: None)
Traceback (most recent call last):
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
yield next(it)
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/offsite.py", line 29, in process_spider_output
for x in result:
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spidermiddlewares/depth.py", line 58, in <genexpr>
return (r for r in result or () if _filter(r))
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/spiders/sitemap.py", line 36, in _parse_sitemap
yield Request(url, callback=self._parse_sitemap)
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 25, in __init__
self._set_url(url)
File "/home/paul/.virtualenvs/scrapy12/local/lib/python2.7/site-packages/scrapy/http/request/__init__.py", line 57, in _set_url
raise ValueError('Missing scheme in request url: %s' % self._url)
ValueError: Missing scheme in request url: /sitemap.ashx
2016-11-09 17:46:19 [scrapy] INFO: Closing spider (finished)
2016-11-09 17:46:19 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 291,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 1857,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2016, 11, 9, 16, 46, 19, 332383),
'log_count/DEBUG': 2,
'log_count/ERROR': 1,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'spider_exceptions/ValueError': 1,
'start_time': datetime.datetime(2016, 11, 9, 16, 46, 19, 71714)}
2016-11-09 17:46:19 [scrapy] INFO: Spider closed (finished)
``` | null | https://github.com/scrapy/scrapy/pull/2395 | null | {'base_commit': '2086ff4065a43fa40d909f81e62623e265df5759', 'files': [{'path': 'scrapy/spiders/sitemap.py', 'status': 'modified', 'Loc': {"('SitemapSpider', '_parse_sitemap', 33)": {'mod': [35]}}}, {'path': 'scrapy/utils/sitemap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "(None, 'sitemap_urls_from_robots', 37)": {'mod': [37, 43]}}}, {'path': 'tests/test_spider.py', 'status': 'modified', 'Loc': {"('SitemapSpiderTest', 'test_get_sitemap_urls_from_robotstxt', 331)": {'add': [334], 'mod': [341]}}}, {'path': 'tests/test_utils_sitemap.py', 'status': 'modified', 'Loc': {"('SitemapTest', 'test_sitemap_urls_from_robots', 110)": {'add': [121], 'mod': [127, 128]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/spiders/sitemap.py",
"scrapy/utils/sitemap.py"
],
"doc": [],
"test": [
"tests/test_spider.py",
"tests/test_utils_sitemap.py"
],
"config": [],
"asset": []
} | 1 |
ageitgey | face_recognition | f21631401119e4af2e919dd662c3817b2c480c75 | https://github.com/ageitgey/face_recognition/issues/149 | Tolerance factor not working from cli | * face_recognition version:
* Python version: 3.5
* Operating System: Ubuntu 16
### Description
Hi! I tried to set the tolerance factor from the CLI but it doesn't work. It says: "Error: no such option: --tolerance". I am using the preconfigured VM from the Medium article.
### What I Did
```
face_recognition --tolerance 0.5 ./knwown ./unkwnown
```
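For context, the fix is just registering the missing option. face_recognition's CLI uses Click, so this argparse version is only an illustrative equivalent, not the package's actual `cli.py`:

```python
import argparse

parser = argparse.ArgumentParser(prog="face_recognition")
parser.add_argument("known_folder")
parser.add_argument("unknown_folder")
parser.add_argument("--tolerance", type=float, default=0.6,
                    help="Match threshold; lower is stricter.")

args = parser.parse_args(["./known", "./unknown", "--tolerance", "0.5"])
print(args.tolerance)  # 0.5
```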
Thanks! | null | https://github.com/ageitgey/face_recognition/pull/137 | null | {'base_commit': 'f21631401119e4af2e919dd662c3817b2c480c75', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [132]}}}, {'path': 'face_recognition/cli.py', 'status': 'modified', 'Loc': {"(None, 'test_image', 35)": {'mod': [35, 48]}, "(None, 'process_images_in_process_pool', 60)": {'mod': [60, 72]}, "(None, 'main', 81)": {'mod': [81, 91, 93]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [28]}}}, {'path': 'tests/test_face_recognition.py', 'status': 'modified', 'Loc': {"('Test_face_recognition', 'test_command_line_interface_options', 185)": {'mod': [186]}, "('Test_face_recognition', 'test_command_line_interface', 192)": {'mod': [198, 200, 201]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"face_recognition/cli.py",
"setup.py"
],
"doc": [
"README.md"
],
"test": [
"tests/test_face_recognition.py"
],
"config": [],
"asset": []
} | 1 | |
oobabooga | text-generation-webui | 9ab90d8b608170fe57d893c2150eda3bc11a8b06 | https://github.com/oobabooga/text-generation-webui/issues/2435 | bug | Failed to load embedding model: all-mpnet-base-v2 While Running Textgen in Colab Notebook | ### Describe the bug
I used this command instead of the old CUDA one in my .ipynb:
`!git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa`
Now, I ran the server using the following command -
`!python server.py --extensions openai --model guanaco-7B-GPTQ --model_type LLaMa --api --public-api --share --wbits 4 --groupsize 128`
I am getting below error -
```
WARNING:The gradio "share link" feature uses a proprietary executable to create a reverse tunnel. Use it with care.
2023-05-30 11:21:05.243240: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
INFO:Loading guanaco-7B-GPTQ...
INFO:Found the following quantized model: models/guanaco-7B-GPTQ/Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors
INFO:Loaded the model in 14.96 seconds.
INFO:Loading the extension "openai"...
Failed to load embedding model: all-mpnet-base-v2
```
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
Run Colab.
Use this notebook. [Colab](https://colab.research.google.com/drive/1wURKtZgM_SWhjy-NlHNVjHl-SKT5AwtF?usp=sharing)
Openai Extension not working as intended
### Screenshot
_No response_
### Logs
```shell
WARNING:The gradio "share link" feature uses a proprietary executable to create a reverse tunnel. Use it with care.
2023-05-30 11:21:05.243240: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT
bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda118.so
INFO:Loading guanaco-7B-GPTQ...
INFO:Found the following quantized model: models/guanaco-7B-GPTQ/Guanaco-7B-GPTQ-4bit-128g.no-act-order.safetensors
INFO:Loaded the model in 14.96 seconds.
INFO:Loading the extension "openai"...
Failed to load embedding model: all-mpnet-base-v2
```
### System Info
```shell
Google COlab Notebook with T4 GPU
```
| null | https://github.com/oobabooga/text-generation-webui/pull/2443 | null | {'base_commit': '9ab90d8b608170fe57d893c2150eda3bc11a8b06', 'files': [{'path': 'extensions/openai/script.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20]}, "('Handler', 'do_POST', 159)": {'mod': [197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 234, 235, 236, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"extensions/openai/script.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
Textualize | rich | ef1b9b91ccff680b7f931d75fd92c3caa6fcd622 | https://github.com/Textualize/rich/issues/2083 | Needs triage | [BUG] typing: Progress in Group isn't happy | **Describe the bug**
Running mypy on the following code:
```python
from rich.console import Group
from rich.progress import Progress
outer_progress = Progress()
inner_progress = Progress()
live_group = Group(outer_progress, inner_progress)
```
Produces:
```console
$ mypy --strict tmp.py
tmp.py:6: error: Argument 1 to "Group" has incompatible type "Progress"; expected "Union[ConsoleRenderable, RichCast, str]"
tmp.py:6: note: Following member(s) of "Progress" have conflicts:
tmp.py:6: note: Expected:
tmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, str]
tmp.py:6: note: Got:
tmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]
tmp.py:6: error: Argument 2 to "Group" has incompatible type "Progress"; expected "Union[ConsoleRenderable, RichCast, str]"
tmp.py:6: note: Expected:
tmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, str]
tmp.py:6: note: Got:
tmp.py:6: note: def __rich__(self) -> Union[ConsoleRenderable, RichCast, str]
Found 2 errors in 1 file (checked 1 source file)
```
I think `RichCast` should also be in the Protocol, that is, `__rich__` is allowed to return an object with `__rich__`, ~~or it should not be in `__rich__`, that is, `__rich__(self) -> Union[ConsoleRenderable, str]` should be used for all `__rich__` methods. Which is correct depends on runtime; can a `__rich__` return a `__rich__` which can return a `__rich__`, etc?~~. Ahah, I see `CHANGELOG.md:167:- Allowed `__rich__` to work recursively`, so it's the former.
I'm preparing a PR.
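The recursive behavior that changelog line refers to can be sketched with a toy resolver in the spirit of Rich's cast logic (this is illustrative, not Rich's actual helper):

```python
class Inner:
    def __rich__(self):
        return "rendered!"

class Outer:
    def __rich__(self):
        return Inner()  # __rich__ may itself return a RichCast-able object

def rich_cast(renderable):
    """Follow __rich__ until a plain renderable (here: str) remains."""
    while hasattr(renderable, "__rich__"):
        renderable = renderable.__rich__()
    return renderable

print(rich_cast(Outer()))  # rendered!
```

This is why the `__rich__` Protocol's return type must include `RichCast`, which is exactly the typing fix proposed above.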
**Platform**
<details>
<summary>Click to expand</summary>
What platform (Win/Linux/Mac) are you running on? What terminal software are you using?
I may ask you to copy and paste the output of the following commands. It may save some time if you do it now.
If you're using Rich in a terminal:
```
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── <class 'rich.console.Console'> ──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│ A high level console interface. │
│ │
│ ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=298 ColorSystem.TRUECOLOR> │ │
│ ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = 'truecolor' │
│ encoding = 'utf-8' │
│ file = <_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'> │
│ height = 68 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = True │
│ is_jupyter = False │
│ is_terminal = True │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions(size=ConsoleDimensions(width=298, height=68), legacy_windows=False, min_width=1, max_width=298, is_terminal=True, encoding='utf-8', max_height=68, justify=None, overflow=None, no_wrap=False, highlight=None, markup=None, height=None) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=298, height=68) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 298 │
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭──────────────────────────────────────────────────────────────────────────────────────────── Environment Variables ────────────────────────────────────────────────────────────────────────────────────────────╮
│ {'TERM': 'xterm-256color', 'COLORTERM': 'truecolor', 'CLICOLOR': None, 'NO_COLOR': None, 'TERM_PROGRAM': 'iTerm.app', 'COLUMNS': None, 'LINES': None, 'JPY_PARENT_PID': None, 'VSCODE_VERBOSE_LOGGING': None} │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
platform="Darwin"
rich==11.2.0
```
(Same issue after upgrading to Rich 12)
</details>
| null | https://github.com/Textualize/rich/pull/2089 | null | {'base_commit': 'ef1b9b91ccff680b7f931d75fd92c3caa6fcd622', 'files': [{'path': 'rich/console.py', 'status': 'modified', 'Loc': {"('RichCast', None, 265)": {'mod': [268]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"rich/console.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
nvbn | thefuck | 3da26192cba7dbaa3109fc0454e658ec417aaf5f | https://github.com/nvbn/thefuck/issues/89 | feature request: replace history with corrected command. | It would be a nice feature to correct the command and the history.
I would also like an option to not add {fuck,thefuck} to the history.
| null | https://github.com/nvbn/thefuck/pull/384 | null | {'base_commit': '3da26192cba7dbaa3109fc0454e658ec417aaf5f', 'files': [{'path': 'thefuck/shells.py', 'status': 'modified', 'Loc': {"('Fish', 'app_alias', 128)": {'mod': [129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"thefuck/shells.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 61e722aa126207efcdbc1ddcd4453854ad44ea09 | https://github.com/scikit-learn/scikit-learn/issues/10251 | Extending Criterion | Unless I'm missing something, it's not completely trivial how one can use a custom `sklearn.tree._criterion.Criterion` for a decision tree. See my use case [here](https://stats.stackexchange.com/q/316954/98500).
Things I have tried include:
- Import the `ClassificationCriterion` in Python and subclass it. It seems that `node_impurity` and `children_impurity` do not get called, the impurity is always 0 (perhaps because they are `cdef` and not `cpdef`?). I'm also unsure what the parameters to `__new__` / `__cinit__` should be (e.g. `1` and `np.array([2], dtype='intp')` for a binary classification problem?), or how to pass them properly: I have to create the `Criterion` object from outside the tree to circumvent [the check on the `criterion` argument](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/tree/tree.py#L324).
- Extend `ClassificationCriterion` in a Cython file. This seems to work, but (a) it requires exporting `ClassificationCriterion` from `_criterion.pxd` and (b) it would be nice if it would be documented more extensively what should be done in `node_impurity` and `children_impurity`. I will post my code below once it seems to work correctly.
May I propose one of the following to make this easier?
- Document what should be done to extend the class in Cython or Python - if Python should be allowed: I am aware of the performance issue with that, but in some cases it may be OK to do this in Python - I don't know.
- Make it possible to pass a function or other object not extending `Criterion` to the tree, similar to how it is very easy to implement a custom scorer for validation functions. That would require changing the checks [here](https://github.com/scikit-learn/scikit-learn/blob/a24c8b464d094d2c468a16ea9f8bf8d42d949f84/sklearn/tree/tree.py#L324). | null | https://github.com/scikit-learn/scikit-learn/pull/10325 | null | {'base_commit': '61e722aa126207efcdbc1ddcd4453854ad44ea09', 'files': [{'path': 'sklearn/tree/_criterion.pxd', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [67]}}}, {'path': 'sklearn/tree/_criterion.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [215, 216, 707]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/tree/_criterion.pxd",
"sklearn/tree/_criterion.pyx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | 3d19272be75fe32edd4cf01cb2eeac2281305e42 | https://github.com/scikit-learn/scikit-learn/issues/27682 | good first issue
cython | MAINT Directly `cimport` interfaces from `std::algorithm` | Some Cython implementations use interfaces from the standard library of C++, namely `std::algorithm::move` and `std::algorithm::fill` from [`std::algorithm`](https://en.cppreference.com/w/cpp/algorithm/).
Before Cython 3, those interfaces had to be imported directly using the verbose syntax from Cython:
- https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp#L22-L26
- https://github.com/scikit-learn/scikit-learn/blob/5fc67aeb092d636895b599921283221a68c7a2ad/sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp#L28-L33
Cython 3 introduced the following line natively, for those interfaces. Those interfaces should now be `cimported` directly. That is one can replace the line shown above respectively with:
```cython
from libcpp.algorithm cimport move
from libcpp.algorithm cimport fill
```
I believe this is a good first Cython issue.
Any reader should feel free to pick it up. It might be possible that there is some context missing.
Please let me know if you need help. :slightly_smiling_face: | null | https://github.com/scikit-learn/scikit-learn/pull/28489 | null | {'base_commit': '3d19272be75fe32edd4cf01cb2eeac2281305e42', 'files': [{'path': 'sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 16)': {'add': [16]}, '(None, None, 28)': {'mod': [28, 29, 30, 31, 32, 33]}}}, {'path': 'sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp', 'status': 'modified', 'Loc': {'(None, None, 6)': {'add': [6]}, '(None, None, 22)': {'mod': [22, 23, 24, 25, 26]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/metrics/_pairwise_distances_reduction/_radius_neighbors.pyx.tp",
"sklearn/metrics/_pairwise_distances_reduction/_middle_term_computer.pyx.tp"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null |
fastapi | fastapi | 033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8 | https://github.com/fastapi/fastapi/issues/429 | bug
reviewed | OpenAPI: HTTP_422 response does not use custom media_type | **Describe the bug**
FastAPI automatically adds an HTTP_422 response to all paths in the OpenAPI specification that have parameters or a request body. This response does not use the media_type of the response_class if a custom one is defined. Furthermore, it overwrites any error object format with the default one.
**To Reproduce**
Create a path with parameters and add custom response_class to decorator. Add custom exception handlers that reformat the default error responses as per your liking. Then observe generated openapi.json
```python
from fastapi import FastAPI, HTTPException
from fastapi.exceptions import RequestValidationError
from starlette import status
from starlette.responses import JSONResponse
from . import schemas
app = FastAPI()
class JsonApiResponse(JSONResponse):
media_type = 'application/vnd+json.api'
@app.exception_handler(HTTPException)
async def http_exception_handler(request, exc: HTTPException) -> JsonApiResponse:
headers = getattr(exc, "headers", None)
content = schemas.ErrorResponse(errors=[dict(title="Bad request", detail=exc.detail, status=exc.status_code)]).dict()
status_code = exc.status_code
if headers:
return JsonApiResponse(content=content, status_code=status_code, headers=headers)
else:
return JsonApiResponse(content=content, status_code=status_code)
@app.exception_handler(RequestValidationError)
async def request_validation_exception_handler(request, exc: RequestValidationError) -> JsonApiResponse:
http422 = status.HTTP_422_UNPROCESSABLE_ENTITY
return JsonApiResponse(
content=schemas.ErrorResponse(errors=[
dict(title=err['type'], detail=err['msg'], source='/'.join(err['loc']), status=http422)
for err in exc.errors()
]).dict(),
status_code=http422,
)
@app.post('/customers',
status_code=status.HTTP_201_CREATED,
response_model=schemas.CustomerDetailsResponse,
response_class=JsonApiResponse,
)
def customer_create(data: schemas.Customer = Body(..., media_type='application/vnd+json.api', embed=True)):
created_customer = {**data.dict(), **{'id': '1'}}
return {'data': created_customer}
```
The openapi.json will include the unwanted 422 response with the FastAPI default error object definitions:
```yaml
# ...
'422':
description: Validation Error
content:
application/json:
schema:
"$ref": "#/components/schemas/HTTPValidationError"
```
**Expected behavior**
At least, the media_type of the response_class should be respected. But the best would be if the 422 would not be added to the specification unless requested via the path decorator. Or if the 422 definitions of mine were respected.
```python
@app.post('/customers',
status_code=status.HTTP_201_CREATED,
response_model=schemas.CustomerDetailsResponse,
response_class=JsonApiResponse,
responses={
422: {
'model': schemas.ErrorResponse
},
})
data: schemas.Customer = Body(..., media_type='application/vnd+json.api', embed=True)):
pass
```
**Environment:**
- OS: macOS 10.14.6
- Python: 3.6.5
- FastAPI: 0.35.0 | null | https://github.com/fastapi/fastapi/pull/437 | null | {'base_commit': '033bc2a6c9aec3a245eb1f1b4fadb2fbb7a514b8', 'files': [{'path': 'fastapi/openapi/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_openapi_path', 142)": {'add': [227], 'mod': [162, 163, 164, 165, 175, 176, 177, 178, 179, 191, 219, 220]}, "(None, 'get_openapi_operation_parameters', 72)": {'mod': [74, 75, 80, 81, 82, 94]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"fastapi/openapi/utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | d692a72bf3809df35d802041211fcd81d56b1dc6 | https://github.com/All-Hands-AI/OpenHands/issues/710 | enhancement
severity:low | Tune rate-limit backoff | **What problem or use case are you trying to solve?**
Due to the AnthropicException error, which indicates that the request limit has been reached, it is necessary to increase the interval between requests. This will prevent system overload and provide a stable service.
**Describe the UX of the solution you'd like**
From a user experience (UX) perspective, the most important aspect is to send requests at an appropriate interval. Sending requests too frequently will cause errors, while sending requests at too long an interval will result in longer response times. Therefore, finding the right balance is crucial. Additionally, informing users about the current status and estimated wait time would also contribute to a good UX.
**Do you have thoughts on the technical implementation?**
From a technical implementation standpoint, a mechanism to monitor and manage request limits is required. For example, tracking the number of requests and the time they were made, and stopping requests for a certain period of time once the limit is reached. Additionally, implementing an algorithm to dynamically adjust the request interval could be more efficient.
**Additional context**
An additional consideration is the error handling mechanism. When a request limit error occurs, appropriate exception handling and retry logic should be implemented. Additionally, through logging and monitoring systems, the system's status should be continuously monitored, and issues should be promptly detected. | null | https://github.com/All-Hands-AI/OpenHands/pull/1120 | null | {'base_commit': 'd692a72bf3809df35d802041211fcd81d56b1dc6', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [179]}}}, {'path': 'agenthub/monologue_agent/utils/memory.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 7, 12]}}}, {'path': 'agenthub/monologue_agent/utils/monologue.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [1]}, "('Monologue', 'get_total_length', 44)": {'mod': [56]}, "('Monologue', 'condense', 59)": {'mod': [67, 77, 78]}}}, {'path': 'opendevin/config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 42, 49], 'mod': [23, 24]}}}, {'path': 'opendevin/controller/agent_controller.py', 'status': 'modified', 'Loc': {"('AgentController', 'step', 154)": {'add': [175], 'mod': [173, 181, 182, 185, 186, 188, 189, 191]}, '(None, None, None)': {'mod': [2, 6, 7]}}}, {'path': 'opendevin/llm/llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15], 'mod': [3, 4, 8, 13, 14]}, "('LLM', None, 18)": {'add': [18]}, "('LLM', '__init__', 19)": {'add': [25], 'mod': [23, 24, 27, 38, 39, 40, 41, 42, 46]}}}, {'path': 'opendevin/schema/config.py', 'status': 'modified', 'Loc': {"('ConfigType', None, 4)": {'mod': [17]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"agenthub/monologue_agent/utils/memory.py",
"opendevin/schema/config.py",
"opendevin/llm/llm.py",
"agenthub/monologue_agent/utils/monologue.py",
"opendevin/config.py",
"opendevin/controller/agent_controller.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | d16396138e8a61f9bc2c3c36ae8c4d7420d23782 | https://github.com/AntonOsika/gpt-engineer/issues/663 | enhancement
sweep | Sweep: Bump the release version in pyproject.toml |
<details open>
<summary>Checklist</summary>
- [X] `pyproject.toml`
> • Locate the line where the version number is specified. It should be under the [project] section and the line should start with "version = ".
> • Determine the new version number according to the semantic versioning rules. If only minor changes or bug fixes have been made, increment the patch version. If new features have been added in a backwards-compatible manner, increment the minor version. If changes have been made that are not backwards-compatible, increment the major version.
> • Update the version number in the pyproject.toml file. Replace the old version number with the new version number.
> • Check if there are any dependencies or other parts of the project that rely on the version number. If there are, update these parts of the project as well.
> • Commit the changes and push to the repository.
</details>
| null | https://github.com/AntonOsika/gpt-engineer/pull/666 | null | {'base_commit': 'd16396138e8a61f9bc2c3c36ae8c4d7420d23782', 'files': [{'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
scrapy | scrapy | e748ca50ca3e83ac703e02538a27236fedd53a7d | https://github.com/scrapy/scrapy/issues/728 | bug | get_func_args maximum recursion | https://github.com/scrapy/scrapy/blob/master/scrapy/utils/python.py#L149
Today I was working on a project where I have to skip the first item of a list, and then join the rest. Instead of writing the typical slice I tried something much better looking, `Compose(itemgetter(slice(1, None)), Join())`, but I ran into this maximum recursion. I did some research and asked @dangra about it, but nothing came up.
I think the main problem is that `inspect` isn't able recognize `itemgetter` as `something`.
``` python
>>> inspect.getmembers(itemgetter(2))
[('__call__',
<method-wrapper '__call__' of operator.itemgetter object at 0x7f79aeffb990>),
('__class__', <type 'operator.itemgetter'>),
('__delattr__',
<method-wrapper '__delattr__' of operator.itemgetter object at 0x7f79aeffb990>),
('__doc__',
'itemgetter(item, ...) --> itemgetter object\n\nReturn a callable object that fetches the given item(s) from its operand.\nAfter, f=itemgetter(2), the call f(r) returns r[2].\nAfter, g=itemgetter(2,5,3), the call g(r) returns (r[2], r[5], r[3])'),
('__format__',
<built-in method __format__ of operator.itemgetter object at 0x7f79aeffb990>),
('__getattribute__',
<method-wrapper '__getattribute__' of operator.itemgetter object at 0x7f79aeffb990>),
('__hash__',
<method-wrapper '__hash__' of operator.itemgetter object at 0x7f79aeffb990>),
('__init__',
<method-wrapper '__init__' of operator.itemgetter object at 0x7f79aeffb990>),
('__new__', <built-in method __new__ of type object at 0x8c1ec0>),
('__reduce__',
<built-in method __reduce__ of operator.itemgetter object at 0x7f79aeffb990>),
('__reduce_ex__',
<built-in method __reduce_ex__ of operator.itemgetter object at 0x7f79aeffb990>),
('__repr__',
<method-wrapper '__repr__' of operator.itemgetter object at 0x7f79aeffb990>),
('__setattr__',
<method-wrapper '__setattr__' of operator.itemgetter object at 0x7f79aeffb990>),
('__sizeof__',
<built-in method __sizeof__ of operator.itemgetter object at 0x7f79aeffb990>),
('__str__',
<method-wrapper '__str__' of operator.itemgetter object at 0x7f79aeffb990>),
('__subclasshook__',
<built-in method __subclasshook__ of type object at 0x8c1ec0>)]
>>> inspect.getargspec(itemgetter(2).__call__)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.7/inspect.py", line 815, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <method-wrapper '__call__' of operator.itemgetter object at 0xb3ddd0> is not a Python function
>>> inspect.getargspec(itemgetter(slice(None, 2)).__init__)
Traceback (most recent call last):
File "<console>", line 1, in <module>
File "/usr/lib/python2.7/inspect.py", line 815, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <method-wrapper '__init__' of operator.itemgetter object at 0xb3de10> is not a Python function
```
EDIT: Looks like the reason was C functions weren't covered by inspect module until Python 3.4 (http://bugs.python.org/issue17481)
| null | https://github.com/scrapy/scrapy/pull/809 | null | {'base_commit': 'e748ca50ca3e83ac703e02538a27236fedd53a7d', 'files': [{'path': 'scrapy/tests/test_utils_python.py', 'status': 'modified', 'Loc': {"('UtilsPythonTestCase', 'test_get_func_args', 158)": {'add': [195]}}}, {'path': 'scrapy/utils/python.py', 'status': 'modified', 'Loc': {"(None, 'get_func_args', 134)": {'add': [149]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"scrapy/utils/python.py"
],
"doc": [],
"test": [
"scrapy/tests/test_utils_python.py"
],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 626a0a01471accc32ded29ccca3ed93c4995fcd6 | https://github.com/huggingface/transformers/issues/9954 | TensorFlow
Tests
Good First Issue | [Good first issue] LXMERT TensorFlow Integration tests | The TensorFlow implementation of the LXMERT model currently has no integration tests. This is problematic as the behavior can diverge without being noticed.
The [test_modeling_tf_lxmert.py](https://github.com/huggingface/transformers/blob/master/tests/test_modeling_tf_lxmert.py) file should be updated to include integration testing.
An example of a good modeling integration test is visible in the [test_modeling_tf_bert.py#L365-L387](https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387) file:
https://github.com/huggingface/transformers/blob/1809de5165804666ba6c6a02a9d177f6683869cc/tests/test_modeling_tf_bert.py#L365-L387
Some additional tips:
- The test must be marked as slow using the `@slow` decorator, so as to be run *daily*, and not on every commit of every branch/pull request of this repository.
- The test must be decorated with the `@require_tf` decorator so as to only be run in environments using TensorFlow.
- A single test is necessary. If you feel like implementing multiple of these, then sharing the same checkpoint would be ideal so as to reduce download time. | null | https://github.com/huggingface/transformers/pull/12497 | null | {'base_commit': '626a0a01471accc32ded29ccca3ed93c4995fcd6', 'files': [{'path': 'tests/test_modeling_tf_lxmert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, "('TFLxmertModelTest', 'test_saved_model_creation_extended', 710)": {'add': [770]}, "('TFLxmertModelTest', 'test_pt_tf_model_equivalence', 487)": {'mod': [558]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"tests/test_modeling_tf_lxmert.py"
],
"config": [],
"asset": []
} | null |
pandas-dev | pandas | 710df2140555030e4d86e669d6df2deb852bcaf5 | https://github.com/pandas-dev/pandas/issues/24115 | Bug
Datetime
Algos | DTA/TDA/PA inplace methods should actually be inplace | At the moment we are using the implementations designed for Index subclasses, which return new objects. | null | https://github.com/pandas-dev/pandas/pull/30505 | null | {'base_commit': '710df2140555030e4d86e669d6df2deb852bcaf5', 'files': [{'path': 'doc/source/whatsnew/v1.0.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [719]}}}, {'path': 'pandas/core/arrays/datetimelike.py', 'status': 'modified', 'Loc': {"('DatetimeLikeArrayMixin', None, 316)": {'mod': [1314]}, "('DatetimeLikeArrayMixin', '__iadd__', 1315)": {'mod': [1316, 1317]}, "('DatetimeLikeArrayMixin', '__isub__', 1319)": {'mod': [1320, 1321]}}}, {'path': 'pandas/tests/arrays/test_datetimelike.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [227]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"pandas/core/arrays/datetimelike.py"
],
"doc": [
"doc/source/whatsnew/v1.0.0.rst"
],
"test": [
"pandas/tests/arrays/test_datetimelike.py"
],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 0092ac9a2a20873c7c077cefc4d68397a6df2ada | https://github.com/3b1b/manim/issues/30 | TypeError while running a triangle.py scene | I got an error when I try to run some of the [old_projects/triangle_of_power/triangle.py](https://github.com/3b1b/manim/blob/master/old_projects/triangle_of_power/triangle.py) scene.
My command is:
```
python extract_scene.py -p old_projects/triangle_of_power/triangle.py DrawInsideTriangle
```
But after that I get:
```
Traceback (most recent call last):
File "extract_scene.py", line 187, in main
handle_scene(SceneClass(**scene_kwargs), **config)
File "/home/loic/Sources/Git/manim/scene/scene.py", line 47, in __init__
self.construct(*self.construct_args)
File "/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py", line 527, in construct
top = TOP()
File "/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py", line 91, in __init__
VMobject.__init__(self, **kwargs)
File "/home/loic/Sources/Git/manim/mobject/mobject.py", line 33, in __init__
self.generate_points()
File "/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py", line 104, in generate_points
self.set_values(self.x, self.y, self.z)
File "/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py", line 108, in set_values
self.set_value(i, mob)
File "/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py", line 111, in set_value
self.values[index] = self.put_on_vertex(index, value)
File "/home/loic/Sources/Git/manim/./old_projects/triangle_of_power/triangle.py", line 125, in put_on_vertex
value.center()
File "/home/loic/Sources/Git/manim/mobject/mobject.py", line 230, in center
self.shift(-self.get_center())
File "/home/loic/Sources/Git/manim/mobject/mobject.py", line 124, in shift
mob.points += total_vector
TypeError: Cannot cast ufunc add output from dtype('float64') to dtype('int64') with casting rule 'same_kind'
```
And then the fail sound.
Is there something wrong in what I am doing?
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"mobject/mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
geekan | MetaGPT | 5cae13fd0a9b6e5a6f3f39c798cf693675795d89 | https://github.com/geekan/MetaGPT/issues/733 | LLM may generate comments inside [CONTENT][/CONTENT] , which causes parsing the JSON to fail. | **Bug description**
```
parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)
```
**Bug solved method**
<!-- If you solved the bug, describe the idea or process to solve the current bug. Of course, you can also paste the URL address of your Pull Request. -->
<!-- If not, provide more auxiliary information to facilitate our further positioning and investigation -->
Perhaps we could consider adding a constraint to the prompt, indicating not to generate comments inside [CONTENT][/CONTENT], or alternatively, we could trim the comments from the LLM's output.
**Environment information**
<!-- Environment:System version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->
- LLM type and model name: OPENAI gpt-4-1106-preview
- System version: macos 12.5.1
- Python version: python 3.9
<!-- Dependent packagess:the packages version cause the bug(like `pydantic 1.10.8`), installation method(like `pip install metagpt` or `pip install from source` or `run in docker`) -->
- packages version: metagpt commit 82a5eec72707dee44174eae8f8ff1490a6819ecd
- installation method: pip install from source
**Screenshots or logs**
<!-- Screenshots or logs of the bug can help us understand the problem more quickly -->
```
[CONTENT]
{
"Required Python packages": [
"numpy==1.21.2",
"Kivy==2.0.0",
"pygame==2.0.1",
"sqlite3==2.6.0" # sqlite3 is included in Python's standard library, but versioning is for consistency
],
"Required Other language third-party packages": [
"No third-party dependencies required"
],
"Logic Analysis": [
[
"game.py",
"Contains Game class with core game logic, uses numpy for array manipulation, and interacts with UI and Storage classes"
],
[
"main.py",
"Contains main function, initializes the game by calling start_new_game() from Game class"
],
[
"ui.py",
"Contains UI class for user interface, uses Kivy for rendering, and interacts with Game class"
],
[
"storage.py",
"Contains Storage class for saving and loading high scores using SQLite"
]
],
"Task list": [
"storage.py",
"game.py",
"ui.py",
"main.py"
],
"Full API spec": "",
"Shared Knowledge": "'game.py' contains the Game class which is central to the game logic and is used by both 'ui.py' for rendering the game state and 'storage.py' for saving the high score.",
"Anything UNCLEAR": "The monetization strategy for the game is not specified. Will the game include ads, in-app purchases, or be a paid app? This will affect the design of the user interface and potentially the choice of libraries or frameworks."
}
[/CONTENT]
2024-01-10 14:58:53.419 | INFO | metagpt.utils.cost_manager:update_cost:48 - Total running cost: $0.199 | Max budget: $10.000 | Current cost: $0.021, prompt_tokens: 1021, completion_tokens: 352
2024-01-10 14:58:53.423 | WARNING | metagpt.utils.repair_llm_raw_output:run_and_passon:235 - parse json from content inside [CONTENT][/CONTENT] failed at retry 1, exp: Expecting ',' delimiter: line 6 column 27 (char 135)
2024-01-10 14:58:53.424 | INFO | metagpt.utils.repair_llm_raw_output:repair_invalid_json:204 - repair_invalid_json, raw error: Expecting ',' delimiter: line 6 column 27 (char 135)
2024-01-10 14:58:53.424 | ERROR | metagpt.utils.common:log_it:438 - Finished call to 'metagpt.actions.action_node.ActionNode._aask_v1' after 222.144(s), this was the 6th time calling it. exp: RetryError[<Future at 0x7fb828c8d340 state=finished raised JSONDecodeError>]
2024-01-10 14:58:53.424 | WARNING | metagpt.utils.common:wrapper:510 - There is a exception in role's execution, in order to resume, we delete the newest role communication message in the role's memory.
2024-01-10 14:58:53.430 | ERROR | metagpt.utils.common:wrapper:492 - Exception occurs, start to serialize the project, exp:
```
| null | https://github.com/geekan/MetaGPT/pull/963 | null | {'base_commit': '5cae13fd0a9b6e5a6f3f39c798cf693675795d89', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15], 'mod': [6]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"config/config2.example.yaml"
],
"asset": []
} | 1 | |
huggingface | transformers | da1d0d404f05523d37b37207a4c1ff419cc1f47f | https://github.com/huggingface/transformers/issues/26809 | Feature request | Add Mistral Models to Flax | ### Feature request
I would like to implement the ~~Llama~~ Mistral model in flax
### Motivation
I've been trying to get familiar with jax and as such I started migrating the llama model, and I think I am at a point where both models are quite comparable in outcome
### Your contribution
Yes I could submit a PR with the model implementation | null | https://github.com/huggingface/transformers/pull/24587 | null | {'base_commit': 'da1d0d404f05523d37b37207a4c1ff419cc1f47f', 'files': [{'path': 'docs/source/en/index.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [97, 170, 171]}}}, {'path': 'docs/source/en/model_doc/llama.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52, 114]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4556, 8633]}}}, {'path': 'src/transformers/modeling_flax_utils.py', 'status': 'modified', 'Loc': {"(None, 'append_call_sample_docstring', 1270)": {'add': [1277], 'mod': [1270]}}}, {'path': 'src/transformers/models/auto/modeling_flax_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45, 148]}}}, {'path': 'src/transformers/models/bloom/modeling_bloom.py', 'status': 'modified', 'Loc': {"('BloomPreTrainedModel', '_convert_to_bloom_cache', 491)": {'mod': [492]}}}, {'path': 'src/transformers/models/fuyu/image_processing_fuyu.py', 'status': 'modified', 'Loc': {"(None, 'make_list_of_list_of_images', 56)": {'mod': [57]}}}, {'path': 'src/transformers/models/llama/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 57, 85]}}}, {'path': 'src/transformers/models/mpt/modeling_mpt.py', 'status': 'modified', 'Loc': {"('MptPreTrainedModel', '_convert_to_mpt_cache', 267)": {'mod': [268]}}}, {'path': 'src/transformers/utils/dummy_flax_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [802]}}}, {'path': 'tests/models/llama/test_modeling_llama.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [36]}, "('LlamaModelTester', 'prepare_config_and_inputs', 103)": {'mod': [108]}}}, {'path': 'tests/models/mistral/test_modeling_mistral.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [37]}, "('MistralModelTester', 'prepare_config_and_inputs', 105)": {'mod': [110]}}}, {'path': 'tests/models/persimmon/test_modeling_persimmon.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [35]}, "('PersimmonModelTester', 'prepare_config_and_inputs', 102)": {'mod': [107]}}}, {'path': 'tests/models/phi/test_modeling_phi.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [41]}}}, {'path': 'utils/check_docstrings.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [235]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"utils/check_docstrings.py",
"src/transformers/__init__.py",
"src/transformers/utils/dummy_flax_objects.py",
"src/transformers/modeling_flax_utils.py",
"src/transformers/models/mpt/modeling_mpt.py",
"src/transformers/models/bloom/modeling_bloom.py",
"src/transformers/models/fuyu/image_processing_fuyu.py",
"src/transformers/models/auto/modeling_flax_auto.py",
"src/transformers/models/llama/__init__.py"
],
"doc": [
"docs/source/en/model_doc/llama.md",
"docs/source/en/index.md"
],
"test": [
"tests/models/mistral/test_modeling_mistral.py",
"tests/models/phi/test_modeling_phi.py",
"tests/models/persimmon/test_modeling_persimmon.py",
"tests/models/llama/test_modeling_llama.py"
],
"config": [],
"asset": []
} | 1 |
python | cpython | 0aa58fa7a62cd0ee7ec27fa87122425aeff0467d | https://github.com/python/cpython/issues/91043 | build
3.11 | ./Programs/_freeze_module fails with MSAN: Uninitialized value was created by an allocation of 'stat.i' | BPO | [46887](https://bugs.python.org/issue46887)
--- | :---
Nosy | @vstinner
PRs | <li>python/cpython#31633</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2022-03-01.09:38:30.397>
labels = ['build', '3.11']
title = "./Programs/_freeze_module fails with MSAN: Uninitialized value was created by an allocation of 'stat.i'"
updated_at = <Date 2022-03-01.15:01:07.589>
user = 'https://github.com/vstinner'
```
bugs.python.org fields:
```python
activity = <Date 2022-03-01.15:01:07.589>
actor = 'vstinner'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Build']
creation = <Date 2022-03-01.09:38:30.397>
creator = 'vstinner'
dependencies = []
files = []
hgrepos = []
issue_num = 46887
keywords = ['patch']
message_count = 6.0
messages = ['414249', '414264', '414267', '414268', '414269', '414271']
nosy_count = 1.0
nosy_names = ['vstinner']
pr_nums = ['31633']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = None
url = 'https://bugs.python.org/issue46887'
versions = ['Python 3.11']
```
</p></details>
| null | https://github.com/python/cpython/pull/102510 | null | {'base_commit': '0aa58fa7a62cd0ee7ec27fa87122425aeff0467d', 'files': [{'path': 'Objects/longobject.c', 'status': 'modified', 'Loc': {'(None, None, 140)': {'add': [165]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"Objects/longobject.c"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 0b74c72e1c7fe320440fa97a3d256107ea329307 | https://github.com/pandas-dev/pandas/issues/6403 | Bug
IO Excel | ExcelFile parse of empty sheet fails with "IndexError: list index out of range" | Using pandas 0.13.1 on OS X Mavericks to parse a blank Excel spreadsheet causes "IndexError: list index out of range". Apparently the default header=0 in `_parse_excel` causes the execution of `_trim_excel_header(data[header])`. Perhaps when nrows==0 this should not be executed.
``` python
import pandas as pd
xl_file = pd.ExcelFile('blank.xlsx')
xl_file.parse('Sheet1') #Sheet1 has no data
```
STDERR:
```
Traceback (most recent call last):
File "/Users/myourshaw/lab/pypeline/python2/excel_example.py", line 10, in <module>
xl_file.parse('Sheet1')
File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/excel.py", line 208, in parse
**kwds)
File "/usr/local/Cellar/python/2.7.6/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pandas/io/excel.py", line 291, in _parse_excel
data[header] = _trim_excel_header(data[header])
IndexError: list index out of range
```
| null | https://github.com/pandas-dev/pandas/pull/10376 | null | {'base_commit': '0b74c72e1c7fe320440fa97a3d256107ea329307', 'files': [{'path': 'ci/requirements-3.4.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}}}, {'path': 'ci/requirements-3.4_SLOW.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}}}, {'path': 'doc/source/install.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [252, 255]}}}, {'path': 'doc/source/io.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2184], 'mod': [2133]}}}, {'path': 'doc/source/whatsnew/v0.17.0.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [40, 55, 63]}}}, {'path': 'pandas/core/frame.py', 'status': 'modified', 'Loc': {"('DataFrame', 'to_excel', 1194)": {'add': [1248]}}}, {'path': 'pandas/io/excel.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [16]}, "('ExcelFile', '_parse_excel', 322)": {'add': [420]}, "(None, '_conv_value', 467)": {'add': [476]}, "('ExcelWriter', None, 482)": {'add': [499]}, "('_XlwtWriter', '__init__', 1159)": {'add': [1162]}, "('_XlsxWriter', 'write_cells', 1300)": {'add': [1313], 'mod': [1339]}, "('ExcelWriter', '__new__', 522)": {'mod': [524, 526]}, "('ExcelWriter', '__init__', 574)": {'mod': [577]}}}, {'path': 'pandas/io/tests/test_excel.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [522, 1220], 'mod': [3]}, "('ExcelReaderTests', 'test_creating_and_reading_multiple_sheets', 455)": {'mod': [474]}}}, {'path': 'vb_suite/packers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 208]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/frame.py",
"vb_suite/packers.py",
"pandas/io/excel.py"
],
"doc": [
"doc/source/install.rst",
"doc/source/io.rst",
"doc/source/whatsnew/v0.17.0.txt"
],
"test": [
"pandas/io/tests/test_excel.py"
],
"config": [
"ci/requirements-3.4.txt",
"ci/requirements-3.4_SLOW.txt"
],
"asset": []
} | 1 |
langflow-ai | langflow | 395c2d7372dffcf1d4f9577f623a2966183595d9 | https://github.com/langflow-ai/langflow/issues/2126 | bug | Error in the Code Export: Boolean values are in the incorrect syntax. 'false' should be changed to 'False', 'true' should be changed to 'True'. | Error in the Code Export: Boolean values are in the incorrect syntax. 'false' should be changed to 'False', 'true' should be changed to 'True'.
**To Reproduce**
Steps to reproduce the behavior:
click to export code, and turn on tweaks
**Screenshots**
<img width="1728" alt="Screenshot 2024-06-10 at 1 42 59 PM" src="https://github.com/langflow-ai/langflow/assets/15969583/46fd4e9c-eef4-4b68-8ca1-bc870f2983c4">
| null | https://github.com/langflow-ai/langflow/pull/2130 | null | {'base_commit': '395c2d7372dffcf1d4f9577f623a2966183595d9', 'files': [{'path': 'src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14], 'mod': [37]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/frontend/src/modals/apiModal/utils/get-python-api-code.tsx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 2cf3d4dbf9da66cbff30f54a032b9c60d6e6073c | https://github.com/3b1b/manim/issues/401 | The video doesn't concatenate, I can only get partial videos | I have only the partial videos with the next error:
"[concat @ 000001ff22102900] Impossible to open '0.mp4'
media\videos\example_scenes\480p15\partial_movie_files\WriteStuff\partial_movie_file_list.txt: No such file or directory
File ready at media\videos\example_scenes\480p15\WriteStuff.mp4"
But I don't have the video WriteStuff.mp4.
Please help me | null | https://github.com/3b1b/manim/pull/402 | null | {'base_commit': '2cf3d4dbf9da66cbff30f54a032b9c60d6e6073c', 'files': [{'path': 'manimlib/scene/scene.py', 'status': 'modified', 'Loc': {"('Scene', 'combine_movie_files', 758)": {'add': [782, 799], 'mod': [798]}}}, {'path': 'manimlib/utils/output_directory_getters.py', 'status': 'modified', 'Loc': {"(None, 'guarantee_existance', 15)": {'mod': [18]}, "(None, 'get_sorted_integer_files', 53)": {'mod': [81]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/scene/scene.py",
"manimlib/utils/output_directory_getters.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
Textualize | rich | 2ee992b17ef5ff3c34f89545b0d57ad4690a64fc | https://github.com/Textualize/rich/issues/2422 | Needs triage | [BUG] Databricks is not identified as Jupyter | You may find a solution to your problem in the [docs](https://rich.readthedocs.io/en/latest/introduction.html) or [issues](https://github.com/willmcgugan/rich/issues).
**Describe the bug**
Databricks is not considered as "Jupyter", therefore `JUPYTER_LINES` and `JUPYTER_COLUMNS` has no effect on the console log
Provide a minimal code example that demonstrates the issue if you can. If the issue is visual in nature, consider posting a screenshot.
Databricks has a Ipython type `InteractiveShell` which is neither `Ipython` or `ZMQInteractiveShell`

```python
def _is_jupyter() -> bool: # pragma: no cover
"""Check if we're running in a Jupyter notebook."""
try:
get_ipython # type: ignore[name-defined]
except NameError:
return False
ipython = get_ipython() # type: ignore[name-defined]
shell = ipython.__class__.__name__
if "google.colab" in str(ipython.__class__) or shell == "ZMQInteractiveShell":
return True # Jupyter notebook or qtconsole
elif shell == "TerminalInteractiveShell":
return False # Terminal running IPython
else:
return False # Other type (?)
```
If you're using Rich in a terminal:
```
python -m rich.diagnose
pip freeze | grep rich
```
If you're using Rich in a Jupyter Notebook, run the following snippet in a cell
and paste the output in your bug report.
```python
from rich.diagnose import report
report()
```
```
╭────────────────────── <class 'rich.console.Console'> ───────────────────────╮
│ A high level console interface. │
│ │
│ ╭─────────────────────────────────────────────────────────────────────────╮ │
│ │ <console width=80 None> │ │
│ ╰─────────────────────────────────────────────────────────────────────────╯ │
│ │
│ color_system = None │
│ encoding = 'utf-8' │
│ file = <PythonShellImpl.ConsoleBuffer object at 0x7f462b809970> │
│ height = 25 │
│ is_alt_screen = False │
│ is_dumb_terminal = False │
│ is_interactive = False │
│ is_jupyter = False │
│ is_terminal = False │
│ legacy_windows = False │
│ no_color = False │
│ options = ConsoleOptions( │
│ size=ConsoleDimensions(width=80, height=25), │
│ legacy_windows=False, │
│ min_width=1, │
│ max_width=80, │
│ is_terminal=False, │
│ encoding='utf-8', │
│ max_height=25, │
│ justify=None, │
│ overflow=None, │
│ no_wrap=False, │
│ highlight=None, │
│ markup=None, │
│ height=None │
│ ) │
│ quiet = False │
│ record = False │
│ safe_box = True │
│ size = ConsoleDimensions(width=80, height=25) │
│ soft_wrap = False │
│ stderr = False │
│ style = None │
│ tab_size = 8 │
│ width = 80 │
╰─────────────────────────────────────────────────────────────────────────────╯
╭─── <class 'rich._windows.WindowsConsoleFeatures'> ────╮
│ Windows features available. │
│ │
│ ╭───────────────────────────────────────────────────╮ │
│ │ WindowsConsoleFeatures(vt=False, truecolor=False) │ │
│ ╰───────────────────────────────────────────────────╯ │
│ │
│ truecolor = False │
│ vt = False │
╰───────────────────────────────────────────────────────╯
╭────── Environment Variables ───────╮
│ { │
│ 'TERM': 'unknown', │
│ 'COLORTERM': None, │
│ 'CLICOLOR': None, │
│ 'NO_COLOR': None, │
│ 'TERM_PROGRAM': None, │
│ 'COLUMNS': None, │
│ 'LINES': None, │
│ 'JUPYTER_COLUMNS': '200', │
│ 'JUPYTER_LINES': '50', │
│ 'JPY_PARENT_PID': None, │
│ 'VSCODE_VERBOSE_LOGGING': None │
│ } │
╰────────────────────────────────────╯
platform="Linux"
```
</details>
| null | https://github.com/Textualize/rich/pull/2424 | null | {'base_commit': '2ee992b17ef5ff3c34f89545b0d57ad4690a64fc', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'CONTRIBUTORS.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}}}, {'path': 'rich/console.py', 'status': 'modified', 'Loc': {"(None, '_is_jupyter', 511)": {'mod': [519]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"rich/console.py"
],
"doc": [
"CONTRIBUTORS.md",
"CHANGELOG.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 65d7a9b9902ad85f27b17d759bd13b59c2afc474 | https://github.com/AntonOsika/gpt-engineer/issues/590 | Please update README.md | I recently tried using it by following the steps in the README.md file and it does not work, please update the file.
I keep getting this error when i try to export/set the API key
openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored
in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://platform.openai.com/account/api-keys for details. | null | https://github.com/AntonOsika/gpt-engineer/pull/592 | null | {'base_commit': '65d7a9b9902ad85f27b17d759bd13b59c2afc474', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'load_env_if_needed', 19)": {'add': [21]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"gpt_engineer/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
langflow-ai | langflow | 2b6f70fdb4f0238b2cf6afdb6473a764e090060f | https://github.com/langflow-ai/langflow/issues/226 | Cannot import name 'BaseLanguageModel' from 'langchain.schema' | **Describe the bug**
A clear and concise description of what the bug is.
**Browser and Version**
- N/A
- macOS 13.3.1 (22E261)
**To Reproduce**
Steps to reproduce the behavior:
1. Install miniconda with Python 3.10.10
2. Install langflow
3. Run langflow
4. See error:
ImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/Users/user/miniconda3/lib/python3.10/site-packages/langchain/schema.py)
| null | https://github.com/langflow-ai/langflow/pull/229 | null | {'base_commit': '2b6f70fdb4f0238b2cf6afdb6473a764e090060f', 'files': [{'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [706, 712, 713, 714, 715, 716, 717, 718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 751, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 1711, 1717, 1718, 3955, 3961, 3962, 3963, 3964, 3965, 3966, 3967, 3968, 3969, 3970, 3971, 3972, 3973, 3974, 3975, 3976, 3977, 3978, 3979, 3980, 3981, 3982, 3983, 3984, 3985, 3986, 3987, 3988, 3989, 3990, 3991, 3992, 3993, 3994, 3995, 3996, 3997, 3998, 3999, 4000, 4001, 4499, 4505, 4506]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}, {'path': 'src/backend/langflow/interface/agents/custom.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [31]}}}, {'path': 'src/backend/langflow/interface/agents/prebuilt.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'src/backend/langflow/interface/tools/util.py', 'status': 'modified', 'Loc': {"(None, 'get_func_tool_params', 8)": {'mod': [22, 24, 25, 26]}}}, {'path': 'src/backend/langflow/interface/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'src/backend/langflow/template/nodes.py', 'status': 'modified', 'Loc': {"('ChainFrontendNode', 'format_field', 536)": {'add': [561]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/backend/langflow/interface/agents/custom.py",
"src/backend/langflow/interface/utils.py",
"src/backend/langflow/template/nodes.py",
"src/backend/langflow/interface/tools/util.py",
"src/backend/langflow/interface/agents/prebuilt.py"
],
"doc": [],
"test": [],
"config": [
"pyproject.toml",
"poetry.lock"
],
"asset": []
} | 1 | |
All-Hands-AI | OpenHands | a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6 | https://github.com/All-Hands-AI/OpenHands/issues/834 | bug | Old node modules need cleared out (Cannot read properties of null (reading 'edgesOut') | <!-- You MUST fill out this template. We will close issues that don't include enough information to reproduce -->
#### Describe the bug
trying to run make build on the latest code and it ends up in this error:
Cannot read properties of null (reading 'edgesOut')
#### Setup and configuration
**Current version**:
<!-- run `git log -n 1` to see this -->
```
commit 229fa988c575c291cff6ffc1f9d15814d9d2a884 (HEAD -> main, origin/main, origin/HEAD)
Author: Xingyao Wang <xingyao6@illinois.edu>
Date: Sun Apr 7 01:04:17 2024 +0800
remove seed=42 to fix #813 (#830)
```
<!-- tell us everything about your environment -->
**My config.toml and environment vars** (be sure to redact API keys):
```
LLM_API_KEY="ollama"
LLM_MODEL="ollama/dolphin-mixtral:latest"
LLM_EMBEDDING_MODEL="local"
LLM_BASE_URL="http://localhost:11434"
WORKSPACE_DIR="./workspace"
```
**My model and agent** (you can see these settings in the UI):
* Model:
* Agent:
**Commands I ran to install and run OpenDevin**:
```
make build
```
**Steps to Reproduce**:
1. pull latest code
2. make build
3.
**Logs, error messages, and screenshots**:
```
142 http fetch GET 200 https://registry.npmjs.org/@swc%2fcore 6ms (cache hit)
143 silly fetch manifest @swc/helpers@^0.5.0
144 http fetch GET 200 https://registry.npmjs.org/@swc%2fhelpers 2ms (cache hit)
145 silly fetch manifest postcss@^8.4.12
146 http fetch GET 200 https://registry.npmjs.org/postcss 6ms (cache hit)
147 silly fetch manifest typescript@>=4.1.0
148 http fetch GET 200 https://registry.npmjs.org/typescript 50ms (cache hit)
149 silly fetch manifest typescript@^4.9.5
150 silly fetch manifest vitest@^0.29.2
151 silly fetch manifest @vitest/browser@*
152 silly fetch manifest vitest@1.4.0
153 silly fetch manifest @types/node@^18.0.0 || >=20.0.0
154 timing idealTree Completed in 4380ms
155 timing command:install Completed in 4385ms
156 verbose stack TypeError: Cannot read properties of null (reading 'edgesOut')
156 verbose stack at #loadPeerSet (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:1313:38)
156 verbose stack at async #buildDepStep (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:924:11)
156 verbose stack at async Arborist.buildIdealTree (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/build-ideal-tree.js:203:7)
156 verbose stack at async Promise.all (index 1)
156 verbose stack at async Arborist.reify (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/node_modules/@npmcli/arborist/lib/arborist/reify.js:154:5)
156 verbose stack at async Install.exec (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/commands/install.js:153:5)
156 verbose stack at async module.exports (/home/atlas/.nvm/versions/node/v18.20.1/lib/node_modules/npm/lib/cli-entry.js:61:5)
157 verbose cwd /home/atlas/OpenDevin/frontend
158 verbose Linux 6.6.4-060604-generic
159 verbose node v18.20.1
160 verbose npm v10.5.0
161 error Cannot read properties of null (reading 'edgesOut')
162 verbose exit 1
163 timing npm Completed in 4511ms
164 verbose unfinished npm timer reify 1712423688807
165 verbose unfinished npm timer reify:loadTrees 1712423688810
166 verbose unfinished npm timer idealTree:buildDeps 1712423691257
167 verbose unfinished npm timer idealTree:node_modules/.pnpm/@monaco-editor+react@4.6.0_monaco-editor@0.47.0_react-dom@18.2.0_react@18.2.0/node_modules/@monaco-editor/react 1712423692071
168 verbose code 1
169 error A complete log of this run can be found in: /home/atlas/.npm/_logs/2024-04-06T17_14_48_682Z-debug-0.log
```
#### Additional Context
| null | https://github.com/All-Hands-AI/OpenHands/pull/867 | null | {'base_commit': 'a0c5c8efe9cd85d19aef9e98d72345e3ae81f1b6', 'files': [{'path': 'opendevin/logging.py', 'status': 'modified', 'Loc': {"(None, 'get_llm_prompt_file_handler', 118)": {'mod': [123]}, "(None, 'get_llm_response_file_handler', 128)": {'mod': [133]}, '(None, None, None)': {'mod': [139, 144]}}}]} | [] | [
"frontend/node_modules"
] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"opendevin/logging.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"frontend/node_modules"
]
} | null |
localstack | localstack | 2fe8440b619329891db150e45910e8aaad97b7ce | https://github.com/localstack/localstack/issues/4987 | type: bug
status: triage needed
aws:s3 | bug: The Content-MD5 you specified did not match what we received | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I started getting the following exception
```
com.amazonaws.services.s3.model.AmazonS3Exception: The Content-MD5 you specified did not match what we received.
(Service: Amazon S3; Status Code: 400; Error Code: BadDigest; Request ID: null; S3 Extended Request ID: null; Proxy: null)
```
after upgrade to `localstack/localstack-light:latest`, reverting back to `localstack/localstack-light:0.13.0` fixes it for me.
### Expected Behavior
No exception.
### How are you starting LocalStack?
Custom (please describe below)
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
Using https://www.testcontainers.org/ to start the test.
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
@Bean
public AmazonS3 createAmazonS3() {
final DockerImageName diName = DockerImageName.parse("localstack/localstack-light:latest").asCompatibleSubstituteFor("localstack/localstack");
final LocalStackContainer localstack = new LocalStackContainer(diName)
.withServices(S3);
localstack.addEnv("AWS_ACCESS_KEY", "test");
localstack.addEnv("AWS_SECRET_ACCESS_KEY", "567");
localstack.addEnv("AWS_REGION", "us-east-1");
localstack.addEnv("LS_LOG", "trace");
localstack.start();
return AmazonS3ClientBuilder
.standard()
.withEndpointConfiguration(localstack.getEndpointConfiguration(S3))
.withCredentials(localstack.getDefaultCredentialsProvider())
.build();
}
```
then calling `store` on `org.springframework.core.io.Resource` which is `SimpleStorageResource`.
### Environment
```markdown
- OS: macOS Catalina 10.15.7
- LocalStack: latest
```
### Anything else?
`LS_LOG=trace` with `localstack/localstack-light:0.13.0`
```
2021-11-22T19:12:03:DEBUG:localstack.services.edge: IN(s3): "GET /test-bucket-name/test-runtime.properties" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52476', 'Amz-Sdk-Invocation-Id': '307eaac4-b1b6-d23e-96b1-a6dcff7d5414', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=72f59f88e302656e9e4c77308f1de7925f5b63aec3efec93dd9d5f32ae6a2b6d', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'X-Amz-Date': '20211122T191203Z', 'Content-Length': '0', 'Connection': 'Keep-Alive', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52476', 'x-localstack-edge': 'http://127.0.0.1:52476'} - data: b''
2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): "GET /test-bucket-name/test-runtime.properties" - status: 404 - response headers: {'x-amzn-requestid': 'UJFL1535CHVAFPN2JLH2ACBUQX026PCCCTNN0RSBF4PJHULNR1AR', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'x-amz-request-id': '3DAD4B54E96B3CA1', 'x-amz-id-2': 'MzRISOwyjmnup3DAD4B54E96B3CA17/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'<?xml version="1.0" encoding="UTF-8"?>\n<Error>\n <Code>NoSuchKey</Code>\n <Message>The specified key does not exist.</Message>\n \n <RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>\n</Error>'
2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): "GET /test-bucket-name/test-runtime.properties" - status: 404 - response headers: {'x-amzn-requestid': 'UJFL1535CHVAFPN2JLH2ACBUQX026PCCCTNN0RSBF4PJHULNR1AR', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'x-amz-request-id': '3DAD4B54E96B3CA1', 'x-amz-id-2': 'MzRISOwyjmnup3DAD4B54E96B3CA17/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'<?xml version="1.0" encoding="UTF-8"?>\n<Error>\n <Code>NoSuchKey</Code>\n <Message>The specified key does not exist.</Message>\n \n <RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>\n</Error>'
2021-11-22T19:12:03:DEBUG:localstack.services.edge: IN(s3): "PUT /test-bucket-name/test-runtime.properties" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52476', 'Amz-Sdk-Invocation-Id': '8a6682d3-1481-f538-4ed4-4ac03c4e4ec3', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=282e9062c19a5a575d49902c3c642928039a210c8d5eb54de069655f10ef94ea', 'Content-Md5': 'pX8KKuGXS1f2VTcuJpqjkw==', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'X-Amz-Date': '20211122T191203Z', 'X-Amz-Decoded-Content-Length': '147', 'Content-Length': '320', 'Connection': 'Keep-Alive', 'Expect': '100-continue', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52476', 'x-localstack-edge': 'http://127.0.0.1:52476'} - data: b'93;chunk-signature=68bf4c0366a3d4c963efb7eaf3426c439ac26f9ca077b6c71e1bd60de69f0259\r\n#20211122+0100\n#Mon Nov 22 20:12:03 CET 2021\nlast.sync.url.test-space-key=2822a50f-4992-425a-b8fb-923735a9ddff317e3479-5907-46cf-b33a-60da9709274f\n\r\n0;chunk-signature=bf3a6ecc9d3913d2ad6618d420c1db6abefb4f452469693ffc5bbd038ad2f2f0\r\n\r\n'
2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): "PUT /test-bucket-name/test-runtime.properties" - status: 200 - response headers: {'ETag': '"a57f0a2ae1974b57f655372e269aa393"', 'last-modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Content-Length': '0', 'x-amzn-requestid': '1EYVT7AJ5TJ3JH1SK3ZVTHBBB860EIC4FTOP9VPHCSHR967AFFAP', 'Content-Type': 'text/html; charset=utf-8', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Location': '/test-bucket-name', 'x-amz-request-id': '5BC855D1EAAEFD00', 'x-amz-id-2': 'MzRISOwyjmnup5BC855D1EAAEFD007/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp'} - response: b''
2021-11-22T19:12:03:DEBUG:localstack.services.edge: OUT(s3): "PUT /test-bucket-name/test-runtime.properties" - status: 200 - response headers: {'ETag': '"a57f0a2ae1974b57f655372e269aa393"', 'last-modified': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Content-Length': '0', 'x-amzn-requestid': '1EYVT7AJ5TJ3JH1SK3ZVTHBBB860EIC4FTOP9VPHCSHR967AFFAP', 'Content-Type': 'text/html; charset=utf-8', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:12:03 GMT', 'Location': '/test-bucket-name', 'x-amz-request-id': '5BC855D1EAAEFD00', 'x-amz-id-2': 'MzRISOwyjmnup5BC855D1EAAEFD007/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp'} - response: b''
```
----
`LS_LOG=trace` with `localstack/localstack-light:latest`
```
2021-11-22T19:10:42.097:DEBUG:localstack.services.edge: IN(s3): "GET /test-bucket-name/test-runtime.properties" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52438', 'Amz-Sdk-Invocation-Id': '3f452c53-2a97-15f7-8f44-96c3b3d4aa27', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-type;host;user-agent;x-amz-content-sha256;x-amz-date, Signature=a8c7d475d338c92c01eca9638e858e8f0e84ae73498435a55520ee04ff655476', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855', 'X-Amz-Date': '20211122T191042Z', 'Content-Length': '0', 'Connection': 'Keep-Alive', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52438', 'x-localstack-edge': 'http://127.0.0.1:52438'} - data: b''
2021-11-22T19:10:42.118:DEBUG:localstack.services.edge: OUT(s3): "GET /test-bucket-name/test-runtime.properties" - status: 404 - response headers: {'x-amzn-requestid': 'RMJVBYKAH478ETR8T1G9DQ4TUHEIKKB96892NRKM3PYQYRVUPI8M', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:10:42 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:42 GMT', 'x-amz-request-id': '7D83EFCB204B6EC9', 'x-amz-id-2': 'MzRISOwyjmnup7D83EFCB204B6EC97/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'<?xml version="1.0" encoding="UTF-8"?>\n<Error>\n <Code>NoSuchKey</Code>\n <Message>The specified key does not exist.</Message>\n \n <RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>\n</Error>'
2021-11-22T19:10:42.119:DEBUG:localstack.services.edge: OUT(s3): "GET /test-bucket-name/test-runtime.properties" - status: 404 - response headers: {'x-amzn-requestid': 'RMJVBYKAH478ETR8T1G9DQ4TUHEIKKB96892NRKM3PYQYRVUPI8M', 'Content-Type': 'application/xml; charset=utf-8', 'Content-Length': '207', 'Access-Control-Allow-Origin': '*', 'Server': 'Werkzeug/2.0.2 Python/3.8.12', 'Date': 'Mon, 22 Nov 2021 19:10:42 GMT', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:42 GMT', 'x-amz-request-id': '7D83EFCB204B6EC9', 'x-amz-id-2': 'MzRISOwyjmnup7D83EFCB204B6EC97/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'accept-ranges': 'bytes', 'content-language': 'en-US'} - response: b'<?xml version="1.0" encoding="UTF-8"?>\n<Error>\n <Code>NoSuchKey</Code>\n <Message>The specified key does not exist.</Message>\n \n <RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>\n</Error>'
2021-11-22T19:10:45.164:DEBUG:localstack.services.edge: IN(s3): "PUT /test-bucket-name/test-runtime.properties" - headers: {'Remote-Addr': '172.17.0.1', 'Host': '127.0.0.1:52438', 'Amz-Sdk-Invocation-Id': '3446d18f-08a6-2432-a4dc-f79846c9655e', 'Amz-Sdk-Request': 'attempt=1;max=4', 'Amz-Sdk-Retry': '0/0/500', 'Authorization': 'AWS4-HMAC-SHA256 Credential=accesskey/20211122/us-east-1/s3/aws4_request, SignedHeaders=amz-sdk-invocation-id;amz-sdk-request;amz-sdk-retry;content-length;content-md5;content-type;host;user-agent;x-amz-content-sha256;x-amz-date;x-amz-decoded-content-length, Signature=56f95a44e31918932bc863893064a1fcafbf4066d44bc44c8d078cf420316011', 'Content-Md5': 'Xi4HEV9K00jfK4+6lHxpDA==', 'Content-Type': 'application/octet-stream', 'User-Agent': 'aws-sdk-java/1.11.951 Mac_OS_X/10.15.7 OpenJDK_64-Bit_Server_VM/11.0.11+9-LTS java/11.0.11 scala/2.13.6 kotlin/1.5.31 vendor/Amazon.com_Inc.', 'X-Amz-Content-Sha256': 'STREAMING-AWS4-HMAC-SHA256-PAYLOAD', 'X-Amz-Date': '20211122T191045Z', 'X-Amz-Decoded-Content-Length': '147', 'Content-Length': '320', 'Connection': 'Keep-Alive', 'Expect': '100-continue', 'X-Forwarded-For': '172.17.0.1, 127.0.0.1:52438', 'x-localstack-edge': 'http://127.0.0.1:52438'} - data: b'93;chunk-signature=5be6b2d473e96bb9f297444da60bdf0ff8f5d2e211e1d551b3cf3646c0946641\r\n#20211122+0100\n#Mon Nov 22 20:10:44 CET 2021\nlast.sync.url.test-space-key=2822a50f-4992-425a-b8fb-923735a9ddff317e3479-5907-46cf-b33a-60da9709274f\n\r\n0;chunk-signature=bd5c830b94346b57ddc8805ba26c44a122256c207014433bf6579b0985f21df7\r\n\r\n'
2021-11-22T19:10:45.167:DEBUG:localstack.services.edge: OUT(s3): "PUT /test-bucket-name/test-runtime.properties" - status: 400 - response headers: {'Content-Type': 'application/xml', 'Location': '/test-bucket-name', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:45 GMT', 'x-amz-request-id': '20278550A22502FB', 'x-amz-id-2': 'MzRISOwyjmnup20278550A22502FB7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Content-Length': '156'} - response: <?xml version="1.0" encoding="utf-8"?>
<Error><Code>BadDigest</Code><Message>The Content-MD5 you specified did not match what we received.</Message></Error>
2021-11-22T19:10:45.168:DEBUG:localstack.services.edge: OUT(s3): "PUT /test-bucket-name/test-runtime.properties" - status: 400 - response headers: {'Content-Type': 'application/xml', 'Location': '/test-bucket-name', 'Last-Modified': 'Mon, 22 Nov 2021 19:10:45 GMT', 'x-amz-request-id': '20278550A22502FB', 'x-amz-id-2': 'MzRISOwyjmnup20278550A22502FB7/JypPGXLh0OVFGcJaaO3KW/hRAqKOpIEEp', 'Content-Length': '156'} - response: <?xml version="1.0" encoding="utf-8"?>
<Error><Code>BadDigest</Code><Message>The Content-MD5 you specified did not match what we received.</Message></Error>
``` | null | https://github.com/localstack/localstack/pull/5001 | null | {'base_commit': '2fe8440b619329891db150e45910e8aaad97b7ce', 'files': [{'path': 'localstack/services/s3/s3_listener.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 883], 'mod': [61, 62]}, "(None, 'check_content_md5', 884)": {'add': [884]}}}, {'path': 'tests/integration/test_s3.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 2, 51]}, "(None, 'test_cors_with_allowed_origins', 2662)": {'add': [2779]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/services/s3/s3_listener.py"
],
"doc": [],
"test": [
"tests/integration/test_s3.py"
],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 8c9d9b0475247f667a0f184f2fbc6d66b955749f | https://github.com/localstack/localstack/issues/11696 | type: bug
status: resolved/fixed
aws:apigateway | bug: API Gateway does not persist correctly when you restart the localstack docker container | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
I have a working api gateway created with localstack. When I restart the container and try to query the same url, I get this message:
`{"message": "The API id '0e0cf92f' does not correspond to a deployed API Gateway API"}`.
# Details:
First I create my API and confirm it works:
```
$ awslocal apigatewayv2 get-apis
{
"Items": [
{
"ApiEndpoint": "http://0e0cf92f.execute-api.localhost.localstack.cloud:4566",
"ApiId": "0e0cf92f",
"ApiKeySelectionExpression": "$request.header.x-api-key",
"CorsConfiguration": {
"AllowHeaders": [
"*"
],
"AllowMethods": [
"*"
],
"AllowOrigins": [
"*"
],
"ExposeHeaders": [
"*"
]
},
"CreatedDate": "2024-10-16T05:24:49.452000+00:00",
"DisableExecuteApiEndpoint": false,
"Name": "XpedigoAPI_v2",
"ProtocolType": "HTTP",
"RouteSelectionExpression": "$request.method $request.path",
"Tags": {},
"Version": "2024-09-25 01:18:37UTC"
}
]
}
```
```
$ awslocal apigatewayv2 get-stages --api-id=0e0cf92f
{
"Items": [
{
"CreatedDate": "2024-10-16T05:24:49.524619+00:00",
"DefaultRouteSettings": {
"DetailedMetricsEnabled": false
},
"DeploymentId": "4d3d207f",
"LastUpdatedDate": "2024-10-16T05:24:49.524619+00:00",
"RouteSettings": {},
"StageName": "local",
"StageVariables": {
"baseurl": "alb-localstack-bdowson.ngrok.io",
"env": "local"
},
"Tags": {}
}
]
}
```
```
$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f
{
"Items": [
{
"AutoDeployed": false,
"CreatedDate": "2024-10-16T05:24:49.529068+00:00",
"DeploymentId": "4d3d207f",
"DeploymentStatus": "DEPLOYED"
}
]
}
```
Confirm it works:
```
$ curl -v https://0e0cf92f.execute-api.localhost.localstack.cloud:4566/local/accounts/health
* Trying 127.0.0.1:4566...
* TCP_NODELAY set
* Connected to 0e0cf92f.execute-api.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=localhost.localstack.cloud
* start date: Sep 6 00:00:00 2024 GMT
* expire date: Dec 5 23:59:59 2024 GMT
* subjectAltName: host "0e0cf92f.execute-api.localhost.localstack.cloud" matched cert's "*.execute-api.localhost.localstack.cloud"
* issuer: C=AT; O=ZeroSSL; CN=ZeroSSL RSA Domain Secure Site CA
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x5b8d78082650)
> GET /local/accounts/health HTTP/2
> Host: 0e0cf92f.execute-api.localhost.localstack.cloud:4566
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 200
< server: TwistedWeb/24.3.0
< date: Wed, 16 Oct 2024 05:25:16 GMT
< content-type: text/html; charset=UTF-8
< cache-control: private, must-revalidate
< expires: -1
< pragma: no-cache
< x-powered-by: PHP/8.1.9RC1
< content-length: 2
< apigw-requestid: 5f9a3aa7
<
* Connection #0 to host 0e0cf92f.execute-api.localhost.localstack.cloud left intact
OK
```
Now I stop localstack, and restart it with `docker-compose up`. The api gateway no longer works correctly:
```
$ curl -v https://0e0cf92f.execute-api.localhost.localstack.cloud:4566/local/accounts/health
* Trying 127.0.0.1:4566...
* TCP_NODELAY set
* Connected to 0e0cf92f.execute-api.localhost.localstack.cloud (127.0.0.1) port 4566 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* successfully set certificate verify locations:
* CAfile: /etc/ssl/certs/ca-certificates.crt
CApath: /etc/ssl/certs
* TLSv1.3 (OUT), TLS handshake, Client hello (1):
* TLSv1.3 (IN), TLS handshake, Server hello (2):
* TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8):
* TLSv1.3 (IN), TLS handshake, Certificate (11):
* TLSv1.3 (IN), TLS handshake, CERT verify (15):
* TLSv1.3 (IN), TLS handshake, Finished (20):
* TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.3 (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384
* ALPN, server accepted to use h2
* Server certificate:
* subject: CN=localhost.localstack.cloud
* start date: Sep 6 00:00:00 2024 GMT
* expire date: Dec 5 23:59:59 2024 GMT
* subjectAltName: host "0e0cf92f.execute-api.localhost.localstack.cloud" matched cert's "*.execute-api.localhost.localstack.cloud"
* issuer: C=AT; O=ZeroSSL; CN=ZeroSSL RSA Domain Secure Site CA
* SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x6550ac6c5650)
> GET /local/accounts/health HTTP/2
> Host: 0e0cf92f.execute-api.localhost.localstack.cloud:4566
> user-agent: curl/7.68.0
> accept: */*
>
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* TLSv1.3 (IN), TLS handshake, Newsession Ticket (4):
* old SSL session ID is stale, removing
* Connection state changed (MAX_CONCURRENT_STREAMS == 100)!
< HTTP/2 404
< server: TwistedWeb/24.3.0
< date: Wed, 16 Oct 2024 05:29:09 GMT
< content-type: application/json
< content-length: 86
<
* Connection #0 to host 0e0cf92f.execute-api.localhost.localstack.cloud left intact
{"message": "The API id '0e0cf92f' does not correspond to a deployed API Gateway API"}
```
But the configurations are all the same as before:
```
$ awslocal apigatewayv2 get-apis
{
"Items": [
{
"ApiEndpoint": "http://0e0cf92f.execute-api.localhost.localstack.cloud:4566",
"ApiId": "0e0cf92f",
"ApiKeySelectionExpression": "$request.header.x-api-key",
"CorsConfiguration": {
"AllowHeaders": [
"*"
],
"AllowMethods": [
"*"
],
"AllowOrigins": [
"*"
],
"ExposeHeaders": [
"*"
]
},
"CreatedDate": "2024-10-16T05:24:49.452000+00:00",
"DisableExecuteApiEndpoint": false,
"Name": "XpedigoAPI_v2",
"ProtocolType": "HTTP",
"RouteSelectionExpression": "$request.method $request.path",
"Tags": {},
"Version": "2024-09-25 01:18:37UTC"
}
]
}
$ awslocal apigatewayv2 get-deployments --api-id=0e0cf92f
{
"Items": [
{
"AutoDeployed": false,
"CreatedDate": "2024-10-16T05:24:49.529068+00:00",
"DeploymentId": "4d3d207f",
"DeploymentStatus": "DEPLOYED"
}
]
}
$ awslocal apigatewayv2 get-stages --api-id=0e0cf92f
{
"Items": [
{
"CreatedDate": "2024-10-16T05:24:49.524619+00:00",
"DefaultRouteSettings": {
"DetailedMetricsEnabled": false
},
"DeploymentId": "4d3d207f",
"LastUpdatedDate": "2024-10-16T05:24:49.524619+00:00",
"RouteSettings": {},
"StageName": "local",
"StageVariables": {
"baseurl": "alb-localstack-bdowson.ngrok.io",
"env": "local"
},
"Tags": {}
}
]
}
```
### Expected Behavior
API gateway should work correctly even after a localstack container restart.
### How are you starting LocalStack?
With a docker-compose file
### Steps To Reproduce
docker-compose.yml:
```
localstack:
container_name: localstack
image: localstack/localstack-pro:latest
ports:
- 4566:4566
- 4510-4559:4510-4559
environment:
- DOCKER_HOST=unix:///var/run/docker.sock
- DEBUG=1
- PERSISTENCE=1
- SNAPSHOT_LOAD_STRATEGY=ON_STARTUP
- LOCALSTACK_API_KEY=${LOCALSTACK_API_KEY}
- PROVIDER_OVERRIDE_APIGATEWAY=next_gen
networks:
app_network:
ipv4_address: 10.0.2.20
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/localstack-data:/var/lib/localstack"
```
1. `docker-compose up localstack`
2. Import API Gateway with `awslocal apigatewayv2 import-api --body file://t.json`
3. Create stage with `awslocal apigatewayv2 create-stage --api-id 54ae753d --stage-name local --auto-deploy`
4. Confirm it works with `curl -v https://[gateway url]/local/whatever`
5. Stop localstack
6. Run `docker-compose up localstack` again
7. Try and curl the api again and you will get an error
### Environment
```markdown
- OS: Ubuntu 20.04.5 LTS
- LocalStack:
LocalStack version: 3.8.2.dev33
LocalStack Docker image sha: localstack/localstack-pro@sha256:b533e1bcfbe8f5462483725276a0e7f8fbd9ded32b1be2dac5ec9cee5e822023
LocalStack build date: 2024-10-15
LocalStack build git hash: 318e1adc
```
### Anything else?
After this error appears, even if I delete the API and recreate it, I still get the message `{"message": "The API id 'xxxx' does not correspond to a deployed API Gateway API"}`. The only way for me to resolve it is to delete my local localstack snapshot folder and rebuild everything. | null | https://github.com/localstack/localstack/pull/11702 | null | {'base_commit': '8c9d9b0475247f667a0f184f2fbc6d66b955749f', 'files': [{'path': 'localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, "('ApiGatewayEndpoint', None, 34)": {'mod': [41]}, "('ApiGatewayEndpoint', '__init__', 41)": {'mod': [44, 45, 46]}}}, {'path': 'localstack-core/localstack/services/apigateway/next_gen/provider.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [21]}, "('ApigatewayNextGenProvider', '__init__', 46)": {'mod': [50, 51]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack-core/localstack/services/apigateway/next_gen/execute_api/router.py",
"localstack-core/localstack/services/apigateway/next_gen/provider.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | d865e5213515cef6344f16f4c77386be9ce8f223 | https://github.com/pandas-dev/pandas/issues/23814 | Performance
Categorical
good first issue | equality comparison with a scalar is slow for category (performance regression) | Are the following 2 ways to compare a series to a scalar equivalent (ignore missing values)? I have to write the hard way in order to take advantage of the category properties.
```python
x = pd.Series(list('abcd') * 1000000).astype('category')
%timeit x == 'a'
# 10 loops, best of 3: 25.2 ms per loop
%timeit x.cat.codes == x.cat.categories.get_loc('a')
# 1000 loops, best of 3: 750 µs per loop
``` | null | https://github.com/pandas-dev/pandas/pull/23888 | null | {'base_commit': 'd865e5213515cef6344f16f4c77386be9ce8f223', 'files': [{'path': 'asv_bench/benchmarks/categoricals.py', 'status': 'modified', 'Loc': {"('Constructor', 'setup', 33)": {'add': [48]}, '(None, None, None)': {'add': [70]}}}, {'path': 'doc/source/whatsnew/v0.24.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1153]}}}, {'path': 'pandas/core/arrays/categorical.py', 'status': 'modified', 'Loc': {"('Categorical', '__init__', 314)": {'add': [349]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/arrays/categorical.py",
"asv_bench/benchmarks/categoricals.py"
],
"doc": [
"doc/source/whatsnew/v0.24.0.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
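The functional equivalence asserted in the issue above (elementwise `==` versus comparing category codes) can be sanity-checked with a short snippet; a smaller series than the issue's is used here purely to keep it quick:

```python
import pandas as pd

# Same construction as in the issue, just smaller.
x = pd.Series(list("abcd") * 1000).astype("category")

slow = x == "a"                                      # elementwise comparison
fast = x.cat.codes == x.cat.categories.get_loc("a")  # integer-code comparison

# With no missing values, both approaches flag exactly the same rows.
assert (slow == fast).all()
print(slow.sum())  # 1000
```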
geekan | MetaGPT | ef5304961edbc194148bc5fbdb4591d2f27c2cfc | https://github.com/geekan/MetaGPT/issues/795 | Human Engagement does not take effect | 
I tried running the source code for Human Engagement from the blog site. When execution reached
team.hire(
[
SimpleCoder(),
SimpleTester(),
SimpleReviewer(),
SimpleReviewer(is_human=True)
]
)
the SimpleReviewer(is_human=True) entry, the system did not stop the process to collect user input; instead it directly used PROMPT_TEMPLATE: str = """
Context: {context}
Review the test cases and provide one critical comments:
"""
name: str = "SimpleWriteReview"
i.e. the default prompt, to send the request to the LLM. | null | https://github.com/geekan/MetaGPT/pull/717 | null | {'base_commit': 'ef5304961edbc194148bc5fbdb4591d2f27c2cfc', 'files': [{'path': 'metagpt/roles/role.py', 'status': 'modified', 'Loc': {"('Role', '__init__', 160)": {'add': [168]}}}, {'path': 'tests/metagpt/roles/test_role.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 14]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "5",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/roles/role.py"
],
"doc": [],
"test": [
"tests/metagpt/roles/test_role.py"
],
"config": [],
"asset": []
} | 1 | |
geekan | MetaGPT | f201b2f5f32c2d48eab6632bf103e9b3a92fc999 | https://github.com/geekan/MetaGPT/issues/1213 | RAG Faiss AssertionError | **Environment information**
<!-- Environment:System version (like ubuntu 22.04), Python version (conda python 3.7), LLM type and model (OpenAI gpt-4-1106-preview) -->
- LLM type and model name: ollama ,nomic-embed-text
- System version:win 11
- Python version:3.9
- MetaGPT version or branch:0.8
**Bug description**
<!-- Clearly and directly describe the current bug -->
run code as below
```
import asyncio
from metagpt.rag.engines import SimpleEngine
from metagpt.rag.schema import FAISSRetrieverConfig
from metagpt.const import EXAMPLE_DATA_PATH
DOC_PATH = EXAMPLE_DATA_PATH / "rag/travel.txt"
async def main():
engine = SimpleEngine.from_docs(input_files=[DOC_PATH], retriever_configs=[FAISSRetrieverConfig()])
answer = await engine.aquery("What does Bob like?")
print(answer)
if __name__ == "__main__":
asyncio.run(main())
```
occur AssertionError
```
Traceback (most recent call last):
File "E:\MyTask\Metagpt\MetaGPT-0.8.0\examples\rag_test.py", line 25, in <module>
asyncio.run(main())
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\asyncio\runners.py", line 44, in run
return loop.run_until_complete(main)
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\asyncio\base_events.py", line 649, in run_until_complete
return future.result()
File "E:\MyTask\Metagpt\MetaGPT-0.8.0\examples\rag_test.py", line 15, in main
SimpleEngine.from_docs(input_files=[DOC_PATH], retriever_configs=retriever_configs).persist(persist_dir)
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\engines\simple.py", line 111, in from_docs
return cls._from_index(index, llm=llm, retriever_configs=retriever_configs, ranker_configs=ranker_configs)
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\engines\simple.py", line 211, in _from_index
retriever = get_retriever(configs=retriever_configs, index=index) # Default index.as_retriever
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\factories\retriever.py", line 52, in get_retriever
retrievers = super().get_instances(configs, **kwargs)
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\factories\base.py", line 18, in get_instances
return [self.get_instance(key, **kwargs) for key in keys]
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\factories\base.py", line 18, in <listcomp>
return [self.get_instance(key, **kwargs) for key in keys]
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\factories\base.py", line 45, in get_instance
return creator(key, **kwargs)
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\factories\retriever.py", line 61, in _create_faiss_retriever
config.index = self._build_index_from_vector_store(config, vector_store, **kwargs)
File "e:\mytask\metagpt\metagpt-0.8.0\metagpt\rag\factories\retriever.py", line 93, in _build_index_from_vector_store
new_index = VectorStoreIndex(
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 74, in __init__
super().__init__(
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\llama_index\core\indices\base.py", line 91, in __init__
index_struct = self.build_index_from_nodes(
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 307, in build_index_from_nodes
return self._build_index_from_nodes(nodes, **insert_kwargs)
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 279, in _build_index_from_nodes
self._add_nodes_to_index(
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\llama_index\core\indices\vector_store\base.py", line 233, in _add_nodes_to_index
new_ids = self._vector_store.add(nodes_batch, **insert_kwargs)
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\llama_index\vector_stores\faiss\base.py", line 121, in add
self._faiss_index.add(text_embedding_np)
File "D:\Dev_Software\Anaconda\envs\metagpt\lib\site-packages\faiss\class_wrappers.py", line 228, in replacement_add
assert d == self.d
AssertionError
```
But when using BM25 instead of Faiss, it runs well.
| null | https://github.com/geekan/MetaGPT/pull/1241 | null | {'base_commit': 'f201b2f5f32c2d48eab6632bf103e9b3a92fc999', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20]}}}, {'path': 'metagpt/configs/embedding_config.py', 'status': 'modified', 'Loc': {"('EmbeddingConfig', None, 16)": {'add': [22, 27, 34, 43]}}}, {'path': 'metagpt/rag/schema.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, "('FAISSRetrieverConfig', 'check_dimensions', 45)": {'mod': [47]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/rag/schema.py",
"metagpt/configs/embedding_config.py"
],
"doc": [],
"test": [],
"config": [
"config/config2.example.yaml"
],
"asset": []
} | 1 | |
scrapy | scrapy | fe7043a648eac1e0ec0af772a21b283566ecd020 | https://github.com/scrapy/scrapy/issues/3903 | enhancement | Can I get remote server's ip address via response? | Can I get remote server's ip address via response?
For some reason, I'll need to get the remote site's IP address when parsing the response. I looked through the documentation but found nothing.
Does anyone know how to do that?
Thanks! | null | https://github.com/scrapy/scrapy/pull/3940 | null | {'base_commit': 'fe7043a648eac1e0ec0af772a21b283566ecd020', 'files': [{'path': 'conftest.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'docs/topics/request-response.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [618, 707], 'mod': [39]}}}, {'path': 'scrapy/core/downloader/__init__.py', 'status': 'modified', 'Loc': {"('Downloader', '_download', 160)": {'mod': [176]}}}, {'path': 'scrapy/core/downloader/handlers/http11.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "('_ResponseReader', '__init__', 440)": {'add': [451]}, "('_ResponseReader', None, 438)": {'add': [457]}, "('ScrapyAgent', '_cb_bodyready', 373)": {'mod': [376]}, "('ScrapyAgent', '_cb_bodydone', 411)": {'mod': [412, 413, 414, 415, 416, 417]}, "('_ResponseReader', 'connectionLost', 483)": {'mod': [489, 493, 498]}}}, {'path': 'scrapy/http/response/__init__.py', 'status': 'modified', 'Loc': {"('Response', '__init__', 20)": {'add': [27]}, "('Response', None, 18)": {'mod': [20]}, "('Response', 'replace', 86)": {'mod': [90]}}}, {'path': 'tests/mockserver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 20, 226], 'mod': [9, 10, 13, 14, 16, 17, 241, 242, 243, 244, 245, 247, 248, 249, 250, 251, 252, 253]}, "('MockServer', None, 201)": {'mod': [201]}, "('MockServer', '__enter__', 203)": {'mod': [204, 206]}}}, {'path': 'tests/test_crawl.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "('CrawlTestCase', 'test_response_ssl_certificate_empty_response', 431)": {'add': [438]}}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {"('CrawlerProcessSubprocess', None, 277)": {'add': [287], 'mod': [277, 278]}, "('CrawlerProcessSubprocess', 'test_reactor_asyncio', 331)": {'add': [334]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code\nDoc"
} | {
"code": [
"scrapy/http/response/__init__.py",
"scrapy/core/downloader/handlers/http11.py",
"scrapy/core/downloader/__init__.py",
"tests/mockserver.py",
"conftest.py"
],
"doc": [
"docs/topics/request-response.rst"
],
"test": [
"tests/test_crawler.py",
"tests/test_crawl.py"
],
"config": [],
"asset": []
} | 1 |
psf | requests | 7eaa5ee37f2ef0fb37dc6e9efbead726665810b4 | https://github.com/psf/requests/issues/3659 | URL proxy auth with empty passwords doesn't emit auth header. | I'm using a proxy that requires authentication to send request that receives 302 response with Location header. I would like python.requests to follow this redirect and make request via proxy with specified credentials. But it seems like this doesn't happen, if I provide credentials in HTTPProxyAuth they will work ok for 200 responses but will fail for 302. See below code sample:
```python
import requests
from requests.auth import HTTPProxyAuth
sess = requests.Session()
url1 = 'http://httpbin.org/'
url2 = 'http://httpbin.org/redirect/2'
auth = HTTPProxyAuth('frank', 'hunter2')
proxies = {
"http": "http://localhost:9000"
}
response1 = sess.get(url1, proxies=proxies, auth=auth)
response1.raise_for_status()
response2 = sess.get(url2, proxies=proxies, auth=auth)
response2.raise_for_status()
```
Now launch MITM proxy on localhost
```
> mitmproxy -p 9000 --singleuser=frank:hunter2
```
This fails with 407 for me, and proxy logs only two requests
```
response2.raise_for_status()
File "----------", line 862, in raise_for_status
raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 407 Client Error: Proxy Authentication Required for url: http://httpbin.org/relative-redirect/1
```
```
>> GET http://httpbin.org/
← 200 text/html 11.87kB 3.57MB/s
GET http://httpbin.org/redirect/2
← 302 text/html 247B 76.59kB/s
```
it does not log request to `Location`.
I see that putting credentials in proxies dictionary somehow fixes this issue when I use MITM proxy but it doesn't fix it for my production proxy (can't share code or proxy details here, need to check closer why it doesn't work for my proxy). I guess some details in setup of proxies might vary.
Is this a bug? I see some issues for proxy auth but they are mostly about HTTPS, not sure if someone reported this thing I describe here. Should this be fixed?
EDIT:
It looks like this always fails if proxy password is empty string.
change auth to
```python
auth = HTTPProxyAuth('frank', '')
proxies = {
"http": "http://frank:@localhost:9000"
}
```
will now always fail on redirect.
```python
auth = HTTPProxyAuth('frank', 'hunter2')
proxies = {
"http": "http://frank:hunter2@localhost:9000"
}
```
works fine on redirects, but seems somewhat duplicated.
I noticed this on Ubuntu 14.04, requests 2.11.1, python 2.7.6, mitmproxy 0.10.1 | null | https://github.com/psf/requests/pull/3660 | null | {'base_commit': '7eaa5ee37f2ef0fb37dc6e9efbead726665810b4', 'files': [{'path': 'requests/adapters.py', 'status': 'modified', 'Loc': {"('HTTPAdapter', 'proxy_headers', 353)": {'mod': [369]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', None, 55)": {'add': [1474]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/adapters.py"
],
"doc": [],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
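The workaround the reporter found (credentials embedded in the proxy URL) can be inspected offline through `HTTPAdapter.proxy_headers`, the method the linked fix touched; the host, port, and credentials below are just the issue's example values:

```python
from requests.adapters import HTTPAdapter

adapter = HTTPAdapter()

# Credentials embedded in the proxy URL become a Proxy-Authorization header.
headers = adapter.proxy_headers("http://frank:hunter2@localhost:9000")
print(headers)

# The reported bug: with an empty password, pre-fix versions dropped the
# header entirely; after the fix it is emitted whenever a username is set.
print(adapter.proxy_headers("http://frank:@localhost:9000"))
```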
pandas-dev | pandas | 923ac2bdee409e4fa8c47414b07f52e036bb21bc | https://github.com/pandas-dev/pandas/issues/25828 | Docs
good first issue | Use Substitution Decorator for CustomBusinessMonthEnd | This is a follow up to https://github.com/pandas-dev/pandas/pull/21093/files#r188805397 which wasn't working with Py27. Now that that is a thing of the past we should be able to use the more idiomatic Substitution approach to generating this docstring
| null | https://github.com/pandas-dev/pandas/pull/25868 | null | {'base_commit': '923ac2bdee409e4fa8c47414b07f52e036bb21bc', 'files': [{'path': 'pandas/tseries/offsets.py', 'status': 'modified', 'Loc': {"('_CustomBusinessMonth', None, 972)": {'add': [979, 987, 988], 'mod': [974, 975, 981, 983, 985, 986]}, '(None, None, None)': {'add': [1054, 1061], 'mod': [18]}, "('CustomBusinessMonthEnd', None, 1055)": {'mod': [1056, 1057, 1058]}, "('CustomBusinessMonthBegin', None, 1062)": {'mod': [1063, 1064, 1065, 1066]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/tseries/offsets.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 59a240cd311f5cedbcd5e12421f1d3bd596d9070 | https://github.com/ansible/ansible/issues/71254 | easyfix
support:core
docs
affects_2.11 | Files contain broken references 404 | <!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
Files contain broken references (return 404):
- [ ] docs/docsite/rst/user_guide/collections_using.rst https://docs.ansible.com/collections/
- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_requirements.rst https://github.com/ansible/ansible/blob/devel/lib/ansible/modules/cloud/vmware/vmware_host_config_manager.py~
- [x] docs/docsite/rst/dev_guide/testing_units.rst https://github.com/ansible/ansible/blob/devel/test/units/modules/network/eos/test_eos_banner.py
- [x] docs/docsite/rst/porting_guides/porting_guide_base_2.11.rst
https://github.com/ansible/ansible/blob/stable-2.11/changelogs/CHANGELOG-v2.11.rst
- [x] docs/docsite/rst/scenario_guides/vmware_scenarios/vmware_external_doc_links.rst https://github.com/vmware/pyvmomi/tree/master/docs
- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.in
- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py
- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py
- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.aws/master/scripts/inventory/ec2.py
- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.ini
- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py
- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py
- [x] docs/docsite/rst/scenario_guides/guide_azure.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/azure_rm.py
- [x] docs/docsite/rst/user_guide/intro_dynamic_inventory.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/cobbler.py
- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.py
- [x] docs/docsite/rst/scenario_guides/guide_infoblox.rst https://raw.githubusercontent.com/ansible-collections/community.general/master/scripts/inventory/infoblox.yaml
- [ ] docs/docsite/rst/scenario_guides/guide_packet.rst https://support.packet.com/kb/articles/user-data
##### ISSUE TYPE
- Documentation Report
##### ANSIBLE VERSION
```
devel
``` | null | https://github.com/ansible/ansible/pull/71705 | null | {'base_commit': '59a240cd311f5cedbcd5e12421f1d3bd596d9070', 'files': [{'path': 'docs/docsite/rst/scenario_guides/guide_packet.rst', 'status': 'modified', 'Loc': {'(None, None, 126)': {'mod': [126]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docs/docsite/rst/scenario_guides/guide_packet.rst"
],
"test": [],
"config": [],
"asset": []
} | null |
ultralytics | yolov5 | f8464b4f66e627ed2778c9a27dbe4a8642482baf | https://github.com/ultralytics/yolov5/issues/2226 | bug | Yolov5 crashes with RTSP stream analysis | ## 🐛 Bug
If I want to analyze an RTSP stream with YOLOv5 in a Docker container, it crashes regardless of whether I use the latest or the v4.0 version.
## To Reproduce (REQUIRED)
Input:
```
docker run --rm -it -e RTSP_PROTOCOLS=tcp -p 8554:8554 aler9/rtsp-simple-server
ffmpeg -i video.mp4 -s 640x480 -c:v libx264 -f rtsp -rtsp_transport tcp rtsp://localhost:8554/analysis
docker run -it ultralytics/yolov5:latest
python3 detect.py --source rtsp://host.docker.internal:8554/analysis --weights yolov5s.pt --conf 0.25 --save-txt
```
Output:
```
Namespace(agnostic_nms=False, augment=False, classes=None, conf_thres=0.25, device='', exist_ok=False, img_size=640, iou_thres=0.45, name='exp', project='runs/detect', save_conf=False, save_txt=True, source='rtsp://host.docker.internal:8554/analysis', update=False, view_img=False, weights=['yolov5s.pt'])
/opt/conda/lib/python3.8/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx (Triggered internally at ../c10/cuda/CUDAFunctions.cpp:100.)
return torch._C._cuda_getDeviceCount() > 0
YOLOv5 v4.0-80-gf8464b4 torch 1.8.0a0+1606899 CPU
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients, 17.0 GFLOPS
[h264 @ 0x55e674656100] co located POCs unavailable
[h264 @ 0x55e674656100] mmco: unref short failure
[h264 @ 0x55e675117cc0] co located POCs unavailable
[h264 @ 0x55e674dbb300] mmco: unref short failure
[h264 @ 0x55e674ec09c0] co located POCs unavailable
1/1: rtsp://host.docker.internal:8554/analysis... success (640x480 at 30.00 FPS).
0: 480x640 13 persons, 1 tennis racket, Done. (2.089s)
qt.qpa.xcb: could not connect to display
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "/opt/conda/lib/python3.8/site-packages/cv2/qt/plugins" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: xcb.
Aborted
```
## Expected behavior
Doing the analysis
## Environment
- OS: Yolov5 docker container on macos Catalina
- GPU none
| null | https://github.com/ultralytics/yolov5/pull/2231 | null | {'base_commit': 'f8464b4f66e627ed2778c9a27dbe4a8642482baf', 'files': [{'path': 'detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12, 13]}, "(None, 'detect', 18)": {'mod': [48, 121]}}}, {'path': 'utils/general.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [97]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"utils/general.py",
"detect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
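The crash in the record above is `cv2.imshow` aborting inside a container that has no display server (the Qt "xcb" plugin failure). The linked fix adds a guard before opening GUI windows; the sketch below is a minimal, hedged illustration of that idea — the function name and the DISPLAY heuristic are assumptions, not the actual yolov5 `check_imshow` implementation.

```python
import os

def can_show_images() -> bool:
    """Heuristic guard: only attempt GUI windows when a display is likely.

    Assumption: on Windows a display is always present; elsewhere the
    X11 DISPLAY variable is used as a proxy. The real fix goes further
    and probes cv2.imshow directly.
    """
    return os.name == "nt" or bool(os.environ.get("DISPLAY"))
```

With such a guard, `detect.py` could silently skip `--view-img` rendering in headless containers instead of aborting the whole run.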
ultralytics | yolov5 | 8fcdf3b60b2930a4273cab4e3df22b77680ff41d | https://github.com/ultralytics/yolov5/issues/6515 | bug | GPU Memory Leak on Loading Pre-Trained Checkpoint | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training
### Bug
Training YOLO from a checkpoint (*.pt) consumes more GPU memory than training from a pre-trained weight (i.e. yolov5l).
### Environment
- YOLO: YOLOv5 (latest; how to check the yolo version?)
- CUDA: 11.6 (Tesla T4, 15360MiB)
- OS: Ubuntu 18.04.6 LTS (Bionic Beaver)
- Python: 3.8.12
### Minimal Reproducible Example
In the below training command, case 2 requires more GPU memory than case 1.
```
# 1. train from pre-trained model
train.py ... --weights yolov5l
# 2. train from pre-trained checkpoint
train.py ... --weights pre_trained_checkpoint.pt
```
### Additional
As reported on the PyTorch forum [1], loading a state dict onto a CUDA device causes a memory leak. We should load it into CPU memory instead:
```python
state_dict = torch.load(directory, map_location=lambda storage, loc: storage)
```
- [1] https://discuss.pytorch.org/t/load-state-dict-causes-memory-leak/36189/5?u=bilzrd
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | null | https://github.com/ultralytics/yolov5/pull/6516 | null | {'base_commit': '8fcdf3b60b2930a4273cab4e3df22b77680ff41d', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 65)": {'mod': [123]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
sherlock-project | sherlock | 2a9297f2444f912c354168c6c0df1c782edace0e | https://github.com/sherlock-project/sherlock/issues/1189 | bug | Sites Giving 404 error or no profile | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm reporting a bug in Sherlock's functionality
- [ ] The bug I'm reporting is not a false positive or a false negative
- [ ] I've verified that I'm running the latest version of Sherlock
- [ ] I've checked for similar bug reports including closed ones
- [ ] I've checked for pull requests that attempt to fix this bug
## Description
<!--
Provide a detailed description of the bug that you have found in Sherlock.
Provide the version of Sherlock you are running.
-->
There are some sites that show up as matched usernames in the results but actually return no profile page or a 404 error; those sites are listed below:
[+] Anilist: https://anilist.co/user/
[+] Coil: https://coil.com/u/
[+] RuneScape: https://apps.runescape.com/runemetrics/app/overview/player/
[+] TrackmaniaLadder: http://en.tm-ladder.com/_rech.php
[+] babyblogRU: https://www.babyblog.ru/user/info | null | https://github.com/sherlock-project/sherlock/pull/1192 | null | {'base_commit': '2a9297f2444f912c354168c6c0df1c782edace0e', 'files': [{'path': 'removed_sites.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1255]}}}, {'path': 'sherlock/resources/data.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [68, 69, 70, 71, 72, 73, 74, 75, 387, 388, 389, 390, 391, 392, 393, 394]}}}, {'path': 'sites.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [106], 'mod': [1, 11, 52]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sherlock/resources/data.json"
],
"doc": [
"removed_sites.md",
"sites.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
home-assistant | core | 9e41a37284b8796bf3a190fe4bd2a4aee8616ec2 | https://github.com/home-assistant/core/issues/55095 | integration: honeywell | Rate limiting in Honeywell TCC | ### The problem
Multiple Honeywell TCC users are reporting rate limit errors in #53981. Restarting Home Assistant seems to temporarily clear it up.
### What is version of Home Assistant Core has the issue?
2021.8.8
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Honeywell Total Connect Comfort (US)
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/honeywell
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
```txt
2021-08-23 11:08:44 ERROR (MainThread) [homeassistant.helpers.entity] Update for climate.downstairs fails
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/components/honeywell/__init__.py", line 113, in update
await self._hass.async_add_executor_job(device.refresh)
File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.9/site-packages/somecomfort/client.py", line 87, in refresh
data = self._client._get_thermostat_data(self.deviceid)
File "/usr/local/lib/python3.9/site-packages/somecomfort/client.py", line 468, in _get_thermostat_data
return self._get_json(url)
File "/usr/local/lib/python3.9/site-packages/somecomfort/client.py", line 444, in _get_json
return self._request_json('get', *args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/somecomfort/client.py", line 436, in _request_json
raise APIRateLimited()
somecomfort.client.APIRateLimited: You are being rate-limited. Try waiting a bit.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 446, in async_update_ha_state
await self.async_device_update()
File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 654, in async_device_update
raise exc
File "/usr/src/homeassistant/homeassistant/components/honeywell/climate.py", line 385, in async_update
await self._data.update()
File "/usr/src/homeassistant/homeassistant/components/honeywell/__init__.py", line 124, in update
result = await self._hass.async_add_executor_job(self._retry())
File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 52, in run
result = self.fn(*self.args, **self.kwargs)
TypeError: 'coroutine' object is not callable
```
### Additional information
_No response_ | null | https://github.com/home-assistant/core/pull/55304 | null | {'base_commit': '9e41a37284b8796bf3a190fe4bd2a4aee8616ec2', 'files': [{'path': 'homeassistant/components/honeywell/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [12]}, "(None, 'async_setup_entry', 16)": {'mod': [45]}, "('HoneywellData', None, 68)": {'mod': [105, 111]}, "('HoneywellData', '_refresh_devices', 105)": {'mod': [108]}, "('HoneywellData', 'update', 111)": {'mod': [116, 127]}}}, {'path': 'homeassistant/components/honeywell/climate.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [109]}, "('HoneywellUSThermostat', 'async_update', 385)": {'mod': [387]}}}, {'path': 'tests/components/honeywell/test_init.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 8, 17]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"homeassistant/components/honeywell/__init__.py",
"homeassistant/components/honeywell/climate.py"
],
"doc": [],
"test": [
"tests/components/honeywell/test_init.py"
],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | f542c58a48e87878028b7639a3c0296bdb351071 | https://github.com/deepfakes/faceswap/issues/3 | dev
advuser | Improve command line usage | Adding command-line argument parsing with a help message would be great!
Preferably with `argparse` | null | https://github.com/deepfakes/faceswap/pull/13 | null | {'base_commit': 'f542c58a48e87878028b7639a3c0296bdb351071', 'files': [{'path': 'extract.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [1, 4, 6, 7, 9, 10, 11, 12, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 28, 30, 31, 32, 34, 35, 36, 37, 38, 39, 41, 42, 43, 44, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 63, 64, 65, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 78]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 4]}, "('FullPaths', None, 10)": {'mod': [10, 11, 12, 13]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": "Code"
} | {
"code": [
"extract.py",
"lib/utils.py",
"lib/faces_detect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
xai-org | grok-1 | e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8 | https://github.com/xai-org/grok-1/issues/14 | Grok implementation details | Not an issue, but it would be nice if this were in the README/model.py header:
314B parameters
Mixture of 8 Experts
2 experts used per token
64 layers
48 attention heads for queries
8 attention heads for keys/values
embeddings size: 6,144
rotary embeddings (RoPE)
SentencePiece tokenizer; 131,072 tokens
Supports activation sharding and 8-bit quantization
Max seq length (context): 8,192 tokens | null | https://github.com/xai-org/grok-1/pull/27 | null | {'base_commit': 'e50578b5f50e4c10c6e7cff31af1ef2bedb3beb8', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
pytorch | pytorch | a63524684d02131aef4f2e9d2cea7bfe210abc96 | https://github.com/pytorch/pytorch/issues/84408 | module: onnx
triaged
topic: bug fixes | Exporting the operator ::col2im to ONNX opset version 11 is not supported | ### 🐛 Describe the bug
When I converted the model in “.pt” format to onnx format, I received an error that the operator col2im is not supported.
## code
import torch
from cvnets import get_model
from options.opts import get_segmentation_eval_arguments
def pt2onnx():
opts = get_segmentation_eval_arguments()
model = get_model(opts)
model.eval()
onnx_save_path = "model/mobilevit.onnx"
in_data = torch.randn(1, 3, 512, 512)
torch.onnx.export(model,
in_data,
onnx_save_path,
opset_version=11,
do_constant_folding=True,
input_names=["in"],
output_names=["out"])
return
if __name__ == '__main__':
pt2onnx()
## error
Traceback (most recent call last):
File "/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py", line 20, in <module>
pt2onnx()
File "/home/sunseeker/project/robot_seg/code/mobilevit_seg/demo.py", line 13, in pt2onnx
torch.onnx.export(model, in_data, onnx_save_path, opset_version=11, do_constant_folding=True, input_names=["in"],
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py", line 350, in export
return utils.export(
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py", line 163, in export
_export(
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py", line 1074, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py", line 731, in _model_to_graph
graph = _optimize_graph(
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py", line 308, in _optimize_graph
graph = _C._jit_pass_onnx(graph, operator_export_type)
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/__init__.py", line 416, in _run_symbolic_function
return utils._run_symbolic_function(*args, **kwargs)
File "/opt/anaconda3/envs/mobilevit/lib/python3.10/site-packages/torch/onnx/utils.py", line 1421, in _run_symbolic_function
raise symbolic_registry.UnsupportedOperatorError(
**torch.onnx.symbolic_registry.UnsupportedOperatorError: Exporting the operator ::col2im to ONNX opset version 11 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub.**
## ENV
PyTorch version: 1.12.1
Is debug build: False
CUDA used to build PyTorch: 10.2
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.1 LTS (x86_64)
GCC version: (Ubuntu 11.2.0-19ubuntu1) 11.2.0
Clang version: Could not collect
CMake version: version 3.22.1
Libc version: glibc-2.35
Python version: 3.10.4 (main, Mar 31 2022, 08:41:55) [GCC 7.5.0] (64-bit runtime)
Python platform: Linux-5.15.0-47-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: Could not collect
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 3050
Nvidia driver version: 510.85.02
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.1
[pip3] pytorchvideo==0.1.5
[pip3] torch==1.12.1
[pip3] torchaudio==0.12.1
[pip3] torchvision==0.13.1
[conda] blas 1.0 mkl
[conda] cudatoolkit 10.2.89 hfd86e86_1
[conda] ffmpeg 4.3 hf484d3e_0 pytorch
[conda] mkl 2021.4.0 h06a4308_640
[conda] mkl-service 2.4.0 py310h7f8727e_0
[conda] mkl_fft 1.3.1 py310hd6ae3a3_0
[conda] mkl_random 1.2.2 py310h00e6091_0
[conda] numpy 1.23.1 py310h1794996_0
[conda] numpy-base 1.23.1 py310hcba007f_0
[conda] pytorch 1.12.1 py3.10_cuda10.2_cudnn7.6.5_0 pytorch
[conda] pytorch-mutex 1.0 cuda pytorch
[conda] pytorchvideo 0.1.5 pypi_0 pypi
[conda] torchaudio 0.12.1 py310_cu102 pytorch
[conda] torchvision 0.13.1 py310_cu102 pytorch
| null | null | https://github.com/pytorch/pytorch/commit/a63524684d02131aef4f2e9d2cea7bfe210abc96 | {'base_commit': 'a63524684d02131aef4f2e9d2cea7bfe210abc96', 'files': [{'path': 'test/onnx/test_pytorch_onnx_no_runtime.py', 'status': 'modified', 'Loc': {"('TestONNXExport', None, 79)": {'add': [1158]}}}, {'path': 'test/onnx/test_pytorch_onnx_onnxruntime.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [47]}}}, {'path': 'torch/csrc/jit/serialization/export.cpp', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [84], 'mod': [62]}}}, {'path': 'torch/onnx/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27, 64]}}}, {'path': 'torch/onnx/_constants.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"torch/onnx/_constants.py",
"torch/onnx/__init__.py",
"torch/csrc/jit/serialization/export.cpp"
],
"doc": [],
"test": [
"test/onnx/test_pytorch_onnx_onnxruntime.py",
"test/onnx/test_pytorch_onnx_no_runtime.py"
],
"config": [],
"asset": []
} | null |
Z4nzu | hackingtool | 64a46031b9c22e2a0526d0216eef627a91da880d | https://github.com/Z4nzu/hackingtool/issues/384 | install error | Traceback (most recent call last):
File "/usr/share/hackingtool/hackingtool.py", line 106, in <module>
os.mkdir(archive)
FileNotFoundError: [Errno 2] No such file or directory: ''
I was also running in root mode, but this error is still showing. What should I do? Please help.
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"hackingtool.py",
"tools/others/socialmedia.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
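The traceback in the record above is `os.mkdir("")` failing because the archive path resolved to an empty string. A hedged sketch of a defensive fix follows — the home-directory fallback is an assumption for illustration, not the project's actual patch:

```python
import os

def ensure_dir(path: str) -> str:
    """Create `path`, falling back to the user's home dir when empty.

    os.mkdir("") raises FileNotFoundError, so an empty or unset path
    is replaced before creation (assumed fallback, for illustration).
    """
    if not path:
        path = os.path.expanduser("~")
    os.makedirs(path, exist_ok=True)  # no error if it already exists
    return path
```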
ultralytics | yolov5 | b754525e99ca62424c484fe529b6142f6bab939e | https://github.com/ultralytics/yolov5/issues/5160 | bug
Stale | Docker Multi-GPU DDP training hang on `destroy_process_group()` with `wandb` option 3 | Hello, when I try to train using multiple GPUs based on the Docker image, I get the error below. I use Ubuntu 18.04, Python 3.8.
<<<<<<<<<<<<<<<<<ERROR>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
```
root@5a70a5f2d489:/usr/src/app# python -m torch.distributed.run --nproc_per_node 2 train.py --batch 64 --data data.yaml --weights yolov5s.pt --device 0,1
WARNING:__main__:*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
Traceback (most recent call last):
File "train.py", line 620, in <module>
main(opt)
File "train.py", line 497, in main
check_file(opt.data), check_yaml(opt.cfg), check_yaml(opt.hyp), str(opt.weights), str(opt.project) # checks
File "/usr/src/app/utils/general.py", line 326, in check_file
assert len(files), f'File not found: {file}' # assert file was found
AssertionError: File not found: data.yaml
wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results
wandb: Enter your choice: (30 second timeout) ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 405) of binary: /opt/conda/bin/python
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:367: UserWarning:
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 405 (local_rank 1) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:
from torch.distributed.elastic.multiprocessing.errors import record
@record
def trainer_main(args):
# do train
**********************************************************************
warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 702, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 361, in wrapper
return f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 698, in main
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 689, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 244, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
***************************************
train.py FAILED
=======================================
Root Cause:
[0]:
time: 2021-10-13_04:30:25
rank: 1 (local_rank: 1)
exitcode: 1 (pid: 405)
error_file: <N/A>
msg: "Process failed with exitcode 1"
=======================================
Other Failures:
<NO_OTHER_FAILURES>
***************************************
root@5a70a5f2d489:/usr/src/app#
``` | null | https://github.com/ultralytics/yolov5/pull/5163 | null | {'base_commit': 'b754525e99ca62424c484fe529b6142f6bab939e', 'files': [{'path': 'utils/loggers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 8, 17, 22]}}}, {'path': 'utils/loggers/wandb/wandb_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24, 25, 27, 28, 29, 30, 31]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"utils/loggers/wandb/wandb_utils.py",
"utils/loggers/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 4c77f62f806567644571b6b3f496f7b332b12327 | https://github.com/AntonOsika/gpt-engineer/issues/656 | Remove unnecessary configs such as: tdd, tdd_plus, clarify, respec | If we have time: benchmark them and store insights before deletion | null | https://github.com/AntonOsika/gpt-engineer/pull/737 | null | {'base_commit': '4c77f62f806567644571b6b3f496f7b332b12327', 'files': [{'path': 'gpt_engineer/preprompts/fix_code', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/preprompts/spec', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/preprompts/unit_tests', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [60, 395, 422, 423, 424, 425, 426, 427, 428, 429, 430, 431, 432, 433, 434, 435, 436, 437, 438]}, "(None, 'gen_spec', 121)": {'mod': [121, 122, 123, 124, 125, 126, 127, 128, 129, 131, 133, 135, 138, 139, 140, 141, 142, 143, 144, 145, 146, 148, 150, 151, 153]}, "(None, 'gen_code_after_unit_tests', 175)": {'mod': [175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189]}, "(None, 'fix_code', 354)": {'mod': [354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367]}, "('Config', None, 378)": {'mod': [383, 384]}}}, {'path': 'tests/test_collect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11, 12]}, "(None, 'test_collect_learnings', 15)": {'mod': [21, 30, 31, 32, 33, 34]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/test_collect.py"
],
"config": [],
"asset": [
"gpt_engineer/preprompts/unit_tests",
"gpt_engineer/preprompts/fix_code",
"gpt_engineer/preprompts/spec"
]
} | 1 | |
OpenInterpreter | open-interpreter | d57ed889c27d5e95e39ea7db59fe518b5f18f942 | https://github.com/OpenInterpreter/open-interpreter/issues/209 | Bug | UnicodeDecodeError - help will be appriciate! | _Exception in thread Thread-1 (save_and_display_stream):
Traceback (most recent call last):
File "C:\Users\ziv\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\Users\ziv\AppData\Local\Programs\Python\Python311\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\ziv\AppData\Local\Programs\Python\Python311\Lib\site-packages\interpreter\code_interpreter.py", line
293, in save_and_display_stream
for line in iter(stream.readline, ''):
File "C:\Users\ziv\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1255.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x8e in position 3284: character maps to <undefined>_
I am a Windows User, running with Conda, on Python version 3.11.2
should I change the encoding? | null | https://github.com/OpenInterpreter/open-interpreter/pull/742 | null | {'base_commit': 'd57ed889c27d5e95e39ea7db59fe518b5f18f942', 'files': [{'path': 'interpreter/code_interpreters/subprocess_code_interpreter.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('SubprocessCodeInterpreter', 'start_process', 39)": {'add': [42, 50]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"interpreter/code_interpreters/subprocess_code_interpreter.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
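The `UnicodeDecodeError` in the record above comes from reading subprocess output with the platform default codec (cp1255 here). The linked fix forces a tolerant text encoding on the pipes; the sketch below shows the general approach with the standard library — the child command is illustrative only, and UTF-8 output is an assumption.

```python
import subprocess
import sys

# Spawn a child whose stdout may contain bytes that are invalid in the
# platform codec; decode as UTF-8 and substitute U+FFFD for anything
# undecodable instead of raising UnicodeDecodeError mid-stream.
proc = subprocess.Popen(
    [sys.executable, "-c",
     "import sys; sys.stdout.buffer.write(b'ok \\x8e done')"],
    stdout=subprocess.PIPE,
    text=True,
    encoding="utf-8",
    errors="replace",  # never raise on undecodable bytes
)
for line in iter(proc.stdout.readline, ""):
    print(line.rstrip())
proc.wait()
```

The key change versus the failing code is passing `encoding=` and `errors=` explicitly to `Popen` rather than letting `readline` use the locale codec.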
ansible | ansible | d9e798b48f62fdc2b604a84c36eb83c985f87754 | https://github.com/ansible/ansible/issues/82683 | bug
has_pr
P3
affects_2.13
affects_2.16 | ansible fact_cache permissions changed after ansible-core update | ### Summary
After update to ansible-core 2.13.2 or higher (It is still an issue with 2.16.3), the default permission of ansible fact cache files changed.
ansible-core 2.13.1 is OK and uses 0644 on the fact files. 2.13.2 and higher uses 0600.
I could not figure out how to change the behavior back.
We need read permission for the group per default.
This is a breaking change for us.
I did not find a hint in the release notes, so I assume this is a bug
https://github.com/ansible/ansible/compare/v2.13.1...v2.13.2
We have a multi user system, and now ansible user cannot read the cache if a ansible run has been executed by another user.
### Issue Type
Bug Report
### Component Name
cache
### Ansible Version
(EDIT: section updated for bot to detect latest version, tested with 2.13.1,2.13.2 and 2.16.3)
```console
$ ansible --version
(ansible-venv-old) xxx:~> ansible --version
ansible [core 2.16.3]
config file = None
configured module search path = ['/home/xxx/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /home/xxx/ansible-venv-old/lib64/python3.11/site-packages/ansible
ansible collection location = /home/xxx/.ansible/collections:/usr/share/ansible/collections
executable location = /home/xxx/ansible-venv-old/bin/ansible
python version = 3.11.5 (main, Sep 06 2023, 11:21:05) [GCC] (/home/xxx/ansible-venv-old/bin/python3.11)
jinja version = 3.1.3
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
CACHE_PLUGIN(env: ANSIBLE_CACHE_PLUGIN) = jsonfile
CACHE_PLUGIN_CONNECTION(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /home/xxx/facts_cache
CACHE:
=====
jsonfile:
________
_uri(env: ANSIBLE_CACHE_PLUGIN_CONNECTION) = /home/xxx/facts_cache
```
### OS / Environment
It is reproducible in with python3.11 venv
Linux
### Steps to Reproduce
After update to 2.13.2, the facts files have 600.
```
(ansible-venv-old) xxx:~> pip install ansible-core==2.13.2
...
xxx:~> rm -r "$HOME/facts_cache"
xxx:~> export ANSIBLE_CACHE_PLUGIN=jsonfile
xxx:~> export ANSIBLE_CACHE_PLUGIN_CONNECTION="$HOME/facts_cache"
xxx:~> ansible -m setup localhost > /dev/null
xxx:~> ls -lisa facts_cache/
total 64
535518 0 drwxr-xr-x 2 xxx yyy 23 Feb 8 13:00 .
262283 0 drwx------ 6 xxx yyy 247 Feb 8 13:00 ..
535519 64 -rw------- 1 xxx yyy 65091 Feb 8 13:00 localhost
```
### Expected Results
With 2.13.1, the permission on the fact file are 644:
```
(ansible-venv-old) xxx:~> ansible --version | head -1
ansible [core 2.13.1]
xxx:~> rm -r "$HOME/facts_cache"
xxx:~> export ANSIBLE_CACHE_PLUGIN=jsonfile
xxx:~> export ANSIBLE_CACHE_PLUGIN_CONNECTION="$HOME/facts_cache"
xxx:~> ansible -m setup localhost > /dev/null
xxx:~> ls -lisa facts_cache/
total 64
535518 0 drwxr-xr-x 2 xxx yyy 23 Feb 8 12:54 .
262283 0 drwx------ 6 xxx yyy 247 Feb 8 12:54 ..
535519 64 -rw-r--r-- 1 xxx yyy 63445 Feb 8 12:54 localhost
```
### Actual Results
After update to 2.13.2 or higher (even latest 2.16.3), the facts files have 600.
See steps to reproduce
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | https://github.com/ansible/ansible/pull/82761 | null | {'base_commit': 'd9e798b48f62fdc2b604a84c36eb83c985f87754', 'files': [{'path': 'lib/ansible/plugins/cache/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30]}, "('BaseFileCacheModule', 'set', 154)": {'add': [166]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/plugins/cache/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
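The regression described above is the cache plugin writing fact files through a secure temp file, which lands at mode 0600; the fix restores group-readable files. A hedged standard-library sketch of the idea — the path names and the 0o644 policy are illustrative, not the exact ansible patch:

```python
import os
import stat
import tempfile

def write_cache_file(path: str, payload: str, mode: int = 0o644) -> None:
    """Write atomically via a temp file, then restore a shared mode.

    NamedTemporaryFile creates its file as 0600, so an explicit chmod
    is needed afterwards if other users must be able to read the cache.
    """
    tmp = tempfile.NamedTemporaryFile(
        mode="w", dir=os.path.dirname(path), delete=False
    )
    try:
        tmp.write(payload)
    finally:
        tmp.close()
    os.chmod(tmp.name, mode)     # restore pre-2.13.2 style permissions
    os.replace(tmp.name, path)   # atomic move into place
```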
AUTOMATIC1111 | stable-diffusion-webui | 98947d173e3f1667eba29c904f681047dea9de90 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6010 | bug-report | [Bug]: Extension Updates Overwrite with a git reset --hard | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
I can't rely on users' config files not being overwritten. If I use `install.py` to rename them, `install.py` does not run until the next cold boot, which causes the extension to not run when first installed. I can probably come up with another workaround, like hardcoding the modifications in the app's script.
I shouldn't be having to try workarounds when someone else can't bother to just chmod their files.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646#issuecomment-1364629164
### Steps to reproduce the problem
It's in the code:

### What should have happened?
Config files to run the extensions should not be overwritten.
### Commit where the problem happens
current to aee611adb874fbabcdeea154a35908ae1f9a4bbf
### What platforms do you use to access UI ?
Windows
### What browsers do you use to access the UI ?
Mozilla Firefox, Google Chrome
### Command Line Arguments
_No response_
### Additional information, context and logs
https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646
https://github.com/Gerschel/sd_web_ui_preset_utils/issues/23 | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/4646 | null | {'base_commit': '98947d173e3f1667eba29c904f681047dea9de90', 'files': [{'path': 'modules/extensions.py', 'status': 'modified', 'Loc': {"('Extension', None, 17)": {'mod': [68]}, "('Extension', 'pull', 68)": {'mod': [70]}}}, {'path': 'modules/ui_extensions.py', 'status': 'modified', 'Loc': {"(None, 'apply_and_restart', 23)": {'mod': [39, 41]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"modules/extensions.py",
"modules/ui_extensions.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |