organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
ansible | ansible | f8d20f970f16806aee1ef555f9f2db115cec7f34 | https://github.com/ansible/ansible/issues/36293 | cloud
aws
module
affects_2.4
support:core
bug | Add support for Timeout (--timeout-in-minutes) parameter in Cloudformation module | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Cloudformation module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.4.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
No changes
##### OS / ENVIRONMENT
Ubuntu 16.04
##### SUMMARY
I believe this is a bug: the Ansible Cloudformation module does not support the important Timeout parameter (the --timeout-in-minutes key in the aws-cli 'create-stack' call).
(Documentation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/cfn-console-add-tags.html )
##### STEPS TO REPRODUCE
Check module documentation
##### EXPECTED RESULTS
Timeout parameter is supported
##### ACTUAL RESULTS
Timeout parameter is not supported | null | https://github.com/ansible/ansible/pull/36445 | null | {'base_commit': 'f8d20f970f16806aee1ef555f9f2db115cec7f34', 'files': [{'path': 'lib/ansible/modules/cloud/amazon/cloudformation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32, 209]}, "(None, 'create_stack', 300)": {'add': [306], 'mod': [304, 305]}, "(None, 'main', 535)": {'add': [544]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/cloud/amazon/cloudformation.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | abb31d0a7ca769a1e6406553a58a7fb0bd3b259a | https://github.com/scikit-learn/scikit-learn/issues/4744 | Bug | Bug with using TreeClassifier with OOB score and sparse matrices | When using the ExtraTreesClassifier (and likely other classes that are derived from BaseTreeClassifier), there is a problem when using sparse matrices: `ValueError: X should be in csr_matrix format, got <class 'scipy.sparse.csc.csc_matrix'>`.
I tracked the issue down to the following lines:
On line 195 of forest.py the sparse matrix is changed to a csc matrix:
`X = check_array(X, dtype=DTYPE, accept_sparse="csc")`
However, on line 369 of forest.py, the following call is made with `check_input=False`:
`p_estimator = estimator.predict_proba(X[mask_indices, :], check_input=False)`
This leads to a ValueError in predict `ValueError: X should be in csr_matrix format, got <class 'scipy.sparse.csc.csc_matrix'>`.
Changing check_input to True seems to fix the issue. It's probably best to also include a test case for this problem; I just made a quick PR with only the False -> True fix.
| null | https://github.com/scikit-learn/scikit-learn/pull/4954 | null | {'base_commit': 'abb31d0a7ca769a1e6406553a58a7fb0bd3b259a', 'files': [{'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [114]}}}, {'path': 'sklearn/ensemble/forest.py', 'status': 'modified', 'Loc': {"('ForestClassifier', '_set_oob_score', 374)": {'add': [375]}, "('ForestRegressor', '_set_oob_score', 659)": {'add': [660]}}}, {'path': 'sklearn/ensemble/tests/test_forest.py', 'status': 'modified', 'Loc': {"(None, 'test_oob_score', 261)": {'add': [264]}, '(None, None, None)': {'add': [270]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/ensemble/forest.py"
],
"doc": [
"doc/whats_new.rst"
],
"test": [
"sklearn/ensemble/tests/test_forest.py"
],
"config": [],
"asset": []
} | 1 |
fastapi | fastapi | 543ef7753aff639ad3aed7c153e42f719e361d38 | https://github.com/fastapi/fastapi/issues/737 | bug
answered
reviewed | dependency_overrides does not play well with scopes | **Describe the bug**
When working with `Security()` dependencies, the scopes disappear when `app.dependency_overrides` is executed. The callable dealing with the scopes gets an empty list instead of the scopes.
**To Reproduce**
```python
from fastapi import FastAPI, Header, Security, Depends
from fastapi.security import SecurityScopes
from starlette.testclient import TestClient
app = FastAPI()
def get_user(required_scopes: SecurityScopes):
print(required_scopes.scopes)
return "John Doe"
def data():
return [1,2,3]
def other_data():
return [3,4,5]
@app.get("/test")
def test(user: str = Security(get_user, scopes=["foo", "bar"]), data = Depends(data)):
return data
client = TestClient(app)
response = client.get("/test")
app.dependency_overrides[data] = other_data
response = client.get("/test")
# prints: ["foo", "bar"] and [] instead of ["foo", "bar"] and ["foo", "bar"]
```
**Expected behavior**
In the above example I expect `get_user()` to print the same scopes twice. Instead, before the `dependency_overrides` it prints the correct scopes, but an empty list afterwards.
**Environment:**
- OS: Linux
- FastAPI Version 0.43.0
- Python 3.7.4
| null | https://github.com/fastapi/fastapi/pull/1549 | null | {'base_commit': '543ef7753aff639ad3aed7c153e42f719e361d38', 'files': [{'path': 'fastapi/dependencies/utils.py', 'status': 'modified', 'Loc': {"(None, 'solve_dependencies', 432)": {'add': [480]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"fastapi/dependencies/utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | 1e95337f3aec4c12244802bb6e493b07b27aa795 | https://github.com/ultralytics/yolov5/issues/459 | bug | custom anchors get flushed when loading pretrain weights | Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following, otherwise it is non-actionable, and we can not help you:
- **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo
- **Common dataset**: coco.yaml or coco128.yaml
- **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#reproduce-our-environment
If this is a custom dataset/training question you **must include** your `train*.jpg`, `test*.jpg` and `results.png` figures, or we can not help you. You can generate these with `utils.plot_results()`.
## 🐛 Bug
In train.py, the anchors set by the user in the yaml file are overwritten by the pretrained weights.
```
if weights.endswith('.pt'): # pytorch format
ckpt = torch.load(weights, map_location=device) # load checkpoint
# load model
try:
ckpt['model'] = {k: v for k, v in ckpt['model'].float().state_dict().items()
if model.state_dict()[k].shape == v.shape} # to FP32, filter
#print(ckpt['model'].keys())
#ckpt['model'].pop('model.27.anchors')
#ckpt['model'].pop('model.27.anchor_grid')
model.load_state_dict(ckpt['model'], strict=False)
except KeyError as e:
s = "%s is not compatible with %s. This may be due to model differences or %s may be out of date. " \
"Please delete or update %s and try again, or use --weights '' to train from scratch." \
% (opt.weights, opt.cfg, opt.weights, opt.weights)
raise KeyError(s) from e
```
## To Reproduce (REQUIRED)
Input:
in ./model/yolov5x.yaml
change the anchors' shape to anything other than the default.
Output:
the anchors set in the yaml file are not activated.
## Expected behavior
A clear and concise description of what you expected to happen.
## Environment
If applicable, add screenshots to help explain your problem.
- OS: [Ubuntu]
- GPU [2080 Ti]
## Additional context
If the user sets more than 9 anchors in the yaml file, the bug is not triggered, because the shape no longer matches the pretrained weights' anchors.
| null | https://github.com/ultralytics/yolov5/pull/462 | null | {'base_commit': '1e95337f3aec4c12244802bb6e493b07b27aa795', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 46)": {'add': [132, 135], 'mod': [134]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 65d7a9b9902ad85f27b17d759bd13b59c2afc474 | https://github.com/AntonOsika/gpt-engineer/issues/589 | "No API key provided" - altough it is provided in the .env file | ## Expected Behavior
If the OpenAI API key is provided in the .env file, it should be recognized and used.
## Current Behavior
Runtime error message: openai.error.AuthenticationError: No API key provided.
### Steps to Reproduce
1. Set the key in the .env file
2. Run the app with gpt-engineer projects/my-new-project
### Solution
When I added the line `openai.api_key = os.getenv("OPENAI_API_KEY")` to the end of the function `load_env_if_needed()` in the file `main.py`, as well as `import openai` at the beginning of this file _(thanks, engerlina, for reminder)_, the issue was resolved. | null | https://github.com/AntonOsika/gpt-engineer/pull/592 | null | {'base_commit': '65d7a9b9902ad85f27b17d759bd13b59c2afc474', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'load_env_if_needed', 19)": {'add': [21]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"gpt_engineer/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 57dc58123b98e2026025cc87bdee474bf0656dcb | https://github.com/scrapy/scrapy/issues/4976 | bug
Windows | Fix and document asyncio reactor problems on Windows | As described in https://twistedmatrix.com/trac/ticket/9766 you cannot just enable AsyncioSelectorReactor on Windows with recent Python, you either need fixed Twisted (which is not released yet, the merged fix is https://github.com/twisted/twisted/pull/1338) or, supposedly, add some manual fix as documented [here](https://github.com/twisted/twisted/blob/09b96850c2ebcb635f448ed3f9bbf5f157be3693/src/twisted/internet/asyncioreactor.py#L35-L44). So if it's possible to add this code to Scrapy we should probably do that, at least until the next Twisted release, and even after it we should document that new enough Twisted is needed in this use case. | null | https://github.com/scrapy/scrapy/pull/5315 | null | {'base_commit': '57dc58123b98e2026025cc87bdee474bf0656dcb', 'files': [{'path': '.github/workflows/tests-windows.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}}}, {'path': 'docs/topics/asyncio.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38]}}}, {'path': 'scrapy/utils/reactor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, "(None, 'install_reactor', 53)": {'add': [59]}}}, {'path': 'tests/CrawlerProcess/asyncio_enabled_reactor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 3]}}}, {'path': 'tests/test_commands.py', 'status': 'modified', 'Loc': {"('RunSpiderCommandTest', None, 557)": {'mod': [677, 678, 679, 702, 703, 704]}, "('RunSpiderCommandTest', 'test_custom_asyncio_loop_enabled_false', 705)": {'mod': [710]}}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7]}, "('CrawlerRunnerHasSpider', None, 231)": {'mod': [287, 288, 289]}, "('CrawlerProcessSubprocess', None, 323)": {'mod': [331, 332, 333, 339, 340, 341, 380, 381, 382, 407, 408, 409]}}}, {'path': 'tests/test_downloader_handlers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, 
"('HttpTestCase', None, 209)": {'add': [289, 298]}, "('FTPTestCase', None, 1055)": {'add': [1057]}}}, {'path': 'tests/test_utils_asyncio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 3]}, "('AsyncioTest', None, 11)": {'mod': [17, 18, 19]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/utils/reactor.py",
"tests/CrawlerProcess/asyncio_enabled_reactor.py"
],
"doc": [
"docs/topics/asyncio.rst"
],
"test": [
"tests/test_utils_asyncio.py",
"tests/test_crawler.py",
"tests/test_commands.py",
"tests/test_downloader_handlers.py"
],
"config": [
".github/workflows/tests-windows.yml"
],
"asset": []
} | 1 |
pandas-dev | pandas | 9b4dfa195e3f23d81389745c26bff8e0087e74b0 | https://github.com/pandas-dev/pandas/issues/22046 | Bug
Indexing | Replacing multiple columns (or just one) with iloc does not work | #### Code Sample, a copy-pastable example if possible
```python
import pandas
columns = pandas.DataFrame({'a2': [11, 12, 13], 'b2': [14, 15, 16]})
inputs = pandas.DataFrame({'a1': [1, 2, 3], 'b1': [4, 5, 6], 'c1': [7, 8, 9]})
inputs.iloc[:, [1]] = columns.iloc[:, [0]]
print(inputs)
```
#### Problem description
I have code that replaces a set of columns with another set of columns, based on column indices. To handle this without a special case, I assumed I could just use `iloc` to both select and set columns in a DataFrame. But it seems that this does not work and fails in strange ways.
#### Expected Output
```
a1 b1 c1
0 1 11 7
1 2 12 8
2 3 13 9
```
But in reality, you get:
```
a1 b1 c1
0 1.0 NaN 7.0
1 2.0 NaN 8.0
2 3.0 NaN 9.0
```
See how the values were converted to float and how the column is all `NaN`s?
But, if I do the following I get expected results:
```
inputs.iloc[:, [1]] = [[11], [12], [13]]
```
This also works:
```
inputs.iloc[:, [1]] = columns.iloc[:, [0]].values
```
So if it works with lists and ndarrays, one would assume it would also work with DataFrames themselves. But it does not.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.6.3.final.0
python-bits: 64
OS: Linux
OS-release: 4.13.0-46-generic
machine: x86_64
processor: x86_64
byteorder: little
LC_ALL: None
LANG: en_US.UTF-8
LOCALE: en_US.UTF-8
pandas: 0.23.3
pytest: None
pip: 18.0
setuptools: 40.0.0
Cython: None
numpy: 1.15.0
scipy: None
pyarrow: None
xarray: None
IPython: None
sphinx: None
patsy: None
dateutil: 2.7.3
pytz: 2018.5
blosc: None
bottleneck: None
tables: None
numexpr: None
feather: None
matplotlib: None
openpyxl: None
xlrd: None
xlwt: None
xlsxwriter: None
lxml: None
bs4: None
html5lib: None
sqlalchemy: None
pymysql: None
psycopg2: None
jinja2: None
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| null | https://github.com/pandas-dev/pandas/pull/37728 | null | {'base_commit': '9b4dfa195e3f23d81389745c26bff8e0087e74b0', 'files': [{'path': 'doc/source/whatsnew/v1.2.0.rst', 'status': 'modified', 'Loc': {'(None, None, 591)': {'add': [591]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {"('_LocationIndexer', '__setitem__', 675)": {'mod': [684]}, "('_iLocIndexer', None, 1322)": {'mod': [1520, 1631, 1717, 1790]}, "('_iLocIndexer', '_setitem_with_indexer', 1520)": {'mod': [1596, 1627, 1629]}, "('_iLocIndexer', '_setitem_with_indexer_split_path', 1631)": {'mod': [1645, 1660]}, "('_iLocIndexer', '_setitem_with_indexer_frame_value', 1717)": {'mod': [1727]}, "('_iLocIndexer', '_setitem_single_block', 1790)": {'mod': [1819, 1825]}, "('_iLocIndexer', '_setitem_with_indexer_missing', 1836)": {'mod': [1857]}}}, {'path': 'pandas/tests/frame/indexing/test_setitem.py', 'status': 'modified', 'Loc': {"('TestDataFrameSetItem', None, 24)": {'mod': [292, 293, 294, 295, 296, 297, 298, 299]}}}, {'path': 'pandas/tests/indexing/test_iloc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [803]}, "('TestILocSeries', 'test_iloc_getitem_nonunique', 966)": {'add': [968]}}}, {'path': 'pandas/tests/indexing/test_indexing.py', 'status': 'modified', 'Loc': {"('TestMisc', 'test_rhs_alignment', 668)": {'mod': [671, 690, 696, 697, 700, 703, 707]}, "('TestMisc', 'run_tests', 671)": {'mod': [678, 682, 686]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/indexing.py"
],
"doc": [
"doc/source/whatsnew/v1.2.0.rst"
],
"test": [
"pandas/tests/frame/indexing/test_setitem.py",
"pandas/tests/indexing/test_indexing.py",
"pandas/tests/indexing/test_iloc.py"
],
"config": [],
"asset": []
} | null |
geekan | MetaGPT | 5446c7e490e7203c61b2ff31181551b2c0f4a86b | https://github.com/geekan/MetaGPT/issues/1430 | DO NOT FORCE VALIDATE '{'Required Python packages'}' by default | **Bug description**
`metagpt\actions\action_node.py", line 432, in _aask_v1
instruct_content = output_class(**parsed_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "..........\Lib\site-packages\pydantic\main.py", line 171, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN
Value error, Missing fields: {'Required Python packages'} `
**Bug solved method**
DO NOT VALIDATE THIS FIELD. The user may ask the agents to do non-Python-related work, so why would we force this validation and introduce a hard error? It seems unnecessary.
**Environment information**
irrelevant
- LLM type and model name:
- MetaGPT version or branch:0.8.1
**Screenshots or logs**
`action_node.py", line 432, in _aask_v1
instruct_content = output_class(**parsed_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".........\Lib\site-packages\pydantic\main.py", line 171, in __init__
self.__pydantic_validator__.validate_python(data, self_instance=self)
pydantic_core._pydantic_core.ValidationError: 1 validation error for PM_NODE_AN
Value error, Missing fields: {'Required Python packages'} [type=value_error, input_value={'Required Rust packages'...ption for backup data.'}, input_type=dict]
For further information visit https://errors.pydantic.dev/2.6/v/value_error
The above exception was the direct cause of the following exception:
Traceback (most recent call last):` | null | https://github.com/FoundationAgents/MetaGPT/pull/1435 | null | {'base_commit': '5446c7e490e7203c61b2ff31181551b2c0f4a86b', 'files': [{'path': 'metagpt/actions/design_api_an.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47], 'mod': [8, 50, 69]}}}, {'path': 'metagpt/actions/project_management_an.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8, 14]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/actions/design_api_an.py",
"metagpt/actions/project_management_an.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
CorentinJ | Real-Time-Voice-Cloning | 5425557efe30863267f805851f918124191e0be0 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/447 | dependencies | Pytorch synthesizer | Splitting this off from #370, which will remain for tensorflow2 conversion. I would prefer this route if we can get it to work. Asking for help from the community on this one.
One example of a pytorch-based tacotron is: https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2
Another option is to manually convert the code and pretrained models which would be extremely time-consuming, but also an awesome learning experience. | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472 | null | {'base_commit': '5425557efe30863267f805851f918124191e0be0', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [18, 23, 24, 65, 66, 68, 70]}}}, {'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13, 43, 162], 'mod': [24, 25, 26, 30, 31, 32, 70, 76]}}}, {'path': 'demo_toolbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 32], 'mod': [23, 24, 25]}}}, {'path': 'encoder/audio.py', 'status': 'modified', 'Loc': {"(None, 'preprocess_wav', 19)": {'mod': [20, 43, 44]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16], 'mod': [1]}}}, {'path': 'requirements_gpu.txt', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/LICENSE.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 4]}}}, {'path': 'synthesizer/audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'synthesizer/feeder.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/hparams.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [348], 'mod': [1, 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 25, 26, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 39, 40, 41, 42, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 105, 106, 107, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 121, 122, 123, 124, 125, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 143, 144, 145, 146, 147, 149, 150, 151, 152, 153, 154, 155, 157, 158, 159, 160, 161, 162, 164, 165, 
166, 167, 168, 169, 170, 172, 174, 175, 176, 177, 178, 180, 181, 182, 183, 184, 185, 186, 187, 189, 190, 191, 192, 193, 194, 196, 197, 198, 199, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 216, 217, 218, 219, 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 231, 232, 233, 234, 235, 237, 238, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 255, 256, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 269, 270, 271, 272, 273, 274, 275, 276, 278, 279, 280, 281, 283, 284, 285, 286, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 336, 337, 338, 339, 342, 343, 344, 345, 347]}, "(None, 'hparams_debug_string', 350)": {'mod': [351, 352, 353]}}}, {'path': 'synthesizer/inference.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [1, 2, 3, 4, 5, 9, 11]}, "('Synthesizer', '__init__', 19)": {'add': [33], 'mod': [21, 22, 24, 25, 26, 27, 28, 29, 30, 31, 32, 35, 36, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 59]}, "('Synthesizer', 'griffin_lim', 149)": {'add': [154]}, "('Synthesizer', None, 15)": {'mod': [19, 106, 107, 108, 109, 110, 111, 113, 114, 116, 117, 118, 119, 121]}, "('Synthesizer', 'is_loaded', 61)": {'mod': [63]}, "('Synthesizer', 'load', 67)": {'mod': [69, 70, 71, 72, 73, 74, 75]}, "('Synthesizer', 'synthesize_spectrograms', 77)": {'mod': [91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 104]}}}, {'path': 'synthesizer/infolog.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/__init__.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/architecture_wrappers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/attention.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/custom_decoder.py', 'status': 'removed', 'Loc': {}}, 
{'path': 'synthesizer/models/helpers.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/modules.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/models/tacotron.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [1, 2, 3, 4, 5, 6, 7, 8, 9]}, "(None, 'split_func', 14)": {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 24, 25, 26, 28, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 79, 81, 82, 84, 86, 87, 88, 89, 90, 91, 93, 94, 95, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 108, 109, 110, 111, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 132, 134, 135, 136, 137, 139, 140, 141, 142, 143, 145, 147, 148, 151, 153, 154, 155, 156, 157, 158, 160, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 198, 199, 200, 201, 202, 203, 205, 206, 207, 209, 210, 212, 213, 214, 215, 216, 217, 218, 220, 221, 222, 223, 225, 226, 228, 229, 230, 232, 233, 234, 235, 237, 238, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 252, 253, 254, 256, 257, 259, 260, 261, 263, 264, 265, 266, 267, 268, 269, 270, 271, 273, 274, 275, 277, 278, 279, 280, 281, 282, 283, 284, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 307, 308, 309, 312, 313, 314, 316, 317, 318, 319, 320, 321, 323, 324, 325, 326, 327, 328, 330, 331, 333, 334, 335, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 369, 370, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 382, 383, 385, 386, 387, 388, 389, 390, 391, 392, 394, 395, 396, 397, 398, 399, 400, 402, 403, 404, 405, 406, 407, 409, 410, 412, 413, 414, 415, 416, 417, 418, 420, 421, 422, 
423, 424, 425, 427, 428, 429, 430, 431, 432, 433, 435, 436, 437, 439, 441, 442, 443, 444, 445, 446, 447, 448, 449, 451, 452, 454, 455, 456, 457, 458, 459, 460, 461, 462, 464, 465, 466, 467, 468, 469, 470, 471, 472, 473, 474, 475, 476, 477, 479, 480, 481, 483, 484, 485, 486, 487, 488, 489, 491, 492, 493, 494, 495, 497, 498, 499, 501, 502, 504, 505, 507, 508, 509, 510, 512, 513, 514, 515, 516, 517, 518, 520, 521]}}}, {'path': 'synthesizer/preprocess.py', 'status': 'modified', 'Loc': {"(None, 'process_utterance', 185)": {'add': [204]}}}, {'path': 'synthesizer/synthesize.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [82], 'mod': [1, 3, 4, 6, 7]}, "(None, 'run_eval', 10)": {'mod': [10, 11, 12, 14, 15, 16, 17, 18, 20, 21, 23, 24, 25, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37]}, "(None, 'run_synthesis', 39)": {'mod': [40, 41, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 77, 78, 80, 81]}}}, {'path': 'synthesizer/tacotron2.py', 'status': 'removed', 'Loc': {}}, {'path': 'synthesizer/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 79, 83], 'mod': [3, 4, 5, 6, 7, 9, 10, 12, 14, 16, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32, 35, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78]}, "(None, 'model_train_mode', 85)": {'mod': [85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 130, 131, 133, 134, 135, 136, 138, 139, 141, 142, 143, 144, 146, 147, 148, 149]}, "(None, 'train', 110)": {'mod': [151, 152, 153, 154, 155, 156, 157, 159, 161, 167, 169, 171, 172, 173, 174, 176, 177, 178, 179, 181, 183, 184, 185, 186, 187, 189, 190, 191, 192, 194, 195, 196, 198, 199, 201, 202, 204, 205, 207, 208, 210, 212, 213, 214, 215, 216, 218, 219, 
220, 222, 223, 224, 226, 227, 228, 230, 231, 232, 233, 234, 235, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 260, 261, 262, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 295, 296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311, 313, 314, 315, 316, 317, 318, 319, 320, 322, 323, 324, 325, 327, 328, 329, 330, 332, 333, 334, 335, 336, 337, 338, 339, 341, 342, 343, 344, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364, 365, 366, 367, 368, 370, 371, 372, 374, 375, 376, 377, 378, 379, 381, 382, 383, 385, 386, 387, 388, 391, 392]}}}, {'path': 'synthesizer/utils/__init__.py', 'status': 'modified', 'Loc': {"('ValueWindow', None, 1)": {'add': [0]}}}, {'path': 'synthesizer_train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 4, 6, 9, 10, 11, 12, 13, 14, 15, 16, 21, 22, 23, 24, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 53, 55]}}}, {'path': 'toolbox/__init__.py', 'status': 'modified', 'Loc': {"('Toolbox', 'init_encoder', 325)": {'add': [333]}, "('Toolbox', None, 42)": {'mod': [43]}, "('Toolbox', '__init__', 43)": {'mod': [54]}, "('Toolbox', 'synthesize', 207)": {'mod': [211, 212, 213, 214, 215, 216, 217, 221, 224, 228]}, "('Toolbox', 'vocode', 237)": {'mod': [243]}}}, {'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [41]}, "('UI', None, 53)": {'mod': [331]}, "('UI', 'populate_models', 338)": {'mod': [347, 348, 349, 350, 351, 352, 353]}}}, {'path': 'vocoder_preprocess.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [32, 40], 'mod': [20]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"synthesizer/models/modules.py",
"synthesizer/models/tacotron.py",
"synthesizer/train.py",
"synthesizer/models/attention.py",
"synthesizer_train.py",
"demo_cli.py",
"toolbox/__init__.py",
"demo_toolbox.py",
"synthesizer/models/architecture_wrappers.py",
"synthesizer/audio.py",
"synthesizer/preprocess.py",
"synthesizer/tacotron2.py",
"synthesizer/hparams.py",
"synthesizer/utils/__init__.py",
"synthesizer/synthesize.py",
"toolbox/ui.py",
"encoder/audio.py",
"synthesizer/feeder.py",
"synthesizer/models/helpers.py",
"synthesizer/models/__init__.py",
"synthesizer/inference.py",
"vocoder_preprocess.py",
"synthesizer/models/custom_decoder.py",
"synthesizer/infolog.py"
],
"doc": [
"synthesizer/LICENSE.txt",
"README.md"
],
"test": [],
"config": [
"requirements_gpu.txt",
"requirements.txt"
],
"asset": []
} | 1 |
OpenInterpreter | open-interpreter | 3c922603c0a7d1ad4113245a3d2bcd23bf4b1619 | https://github.com/OpenInterpreter/open-interpreter/issues/875 | Bug | NameError: name 'computer' is not defined | ### Describe the bug
When I run `interpreter --os`
And then attempt a command like:
`Play a boiler room set on youtube`
I get a `NameError`:
```
▌ OS Control enabled
To find items on the screen, Open Interpreter has been instructed to send screenshots to api.openinterpreter.com (we do not store them). Add --offline to attempt this locally.
Make sure that screen recording permissions are enabled for your Terminal or Python environment.
Warning: In this mode, Open Interpreter will not require approval before performing actions. Be ready to close your terminal.
> Play a boiler room set on youtube
The first step is to open YouTube. We can do this by using a keyboard hotkey to open Spotlight and then typing "Safari" to open the browser. Once Safari is open, we can navigate
to YouTube by writing the URL in the address bar. Let's start by opening Safari.
Expecting value: line 1 column 1 (char 0)
Continuing.
computer.keyboard.hotkey("space", "command") # Opens spotlight
computer.keyboard.write("Safari")
computer.keyboard.press("enter") # To open Safari
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Input In [4], in <cell line: 2>()
1 print('##active_line1##')
----> 2 computer.keyboard.hotkey('space', 'command')
3 print('##active_line2##')
4 computer.keyboard.write('Safari')
NameError: name 'computer' is not defined
It seems there was a mistake in my attempt to run the code. "computer" is not recognized as defined within the current execution environment. Let's correct this by using the
appropriate syntax to execute the code on your machine. We will try to open Safari using the correct approach. Let's run the correct code.
Expecting value: line 1 column 1 (char 0)
Continuing.
# Opens spotlight and types Safari to open the browser
computer.keyboard.hotkey("space", "command")
computer.keyboard.write("Safari")
computer.keyboard.press("enter")
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
Input In [7], in <cell line: 4>()
2 pass
3 print('##active_line2##')
----> 4 computer.keyboard.hotkey('space', 'command')
5 print('##active_line3##')
6 computer.keyboard.write('Safari')
NameError: name 'computer' is not defined
```
And it just gets stuck in this loop where computer is not defined.
### Reproduce
1. `interpreter --os`
2. `Play a boiler room set on youtube`
### Expected behavior
For it to be able to open Safari or my default web browser without a Name Error of computer.
### Screenshots
_No response_
### Open Interpreter version
0.2.0
### Python version
3.9.6
### Operating System name and version
macOS 14.0
### Additional context
I have 2 python versions installed. 3.9.6 and 3.10.8. I installed interpreter on both. | null | https://github.com/OpenInterpreter/open-interpreter/pull/937 | null | {'base_commit': '3c922603c0a7d1ad4113245a3d2bcd23bf4b1619', 'files': [{'path': 'interpreter/core/computer/terminal/terminal.py', 'status': 'modified', 'Loc': {"('Terminal', 'run', 36)": {'mod': [40]}}}, {'path': 'interpreter/terminal_interface/start_terminal_interface.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}, "(None, 'start_terminal_interface', 19)": {'mod': [303, 544, 545, 546, 548, 593, 603, 608, 633]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"interpreter/core/computer/terminal/terminal.py",
"interpreter/terminal_interface/start_terminal_interface.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
oobabooga | text-generation-webui | ad14f0e49929d426560413c0b9de19986cbeac9e | https://github.com/oobabooga/text-generation-webui/issues/461 | bug | SileroTTS creates new audio file for each token | ### Describe the bug
I've just performed a fresh install to confirm this.
Unless I turn on no stream, SileroTTS will attempt to create an audio file for each word / token.
Silero should not attempt to create audio until the response is complete.
Silero extension output directory is being filled up with audio clips that only add one word to the previous file. Is this known to be broken like this?
Turning off stream works, but it means that the text stream doesn't work. Is there a way to turn off streaming for Silero only?
### Is there an existing issue for this?
- [X] I have searched the existing issues
### Reproduction
1. Enable Silero Extension
2. Disable Auto Play
3. Start Chat
### Screenshot
_No response_
### Logs
```shell
N/A
```
### System Info
```shell
Windows 11 / Firefox or Edge
```
| null | https://github.com/oobabooga/text-generation-webui/pull/192 | null | {'base_commit': 'ad14f0e49929d426560413c0b9de19986cbeac9e', 'files': [{'path': 'extensions/silero_tts/script.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 5, 14, 35], 'mod': [10, 18]}, "(None, 'input_modifier', 36)": {'add': [41]}, "(None, 'output_modifier', 44)": {'add': [59, 65, 67], 'mod': [49, 69, 70, 72, 73]}, "(None, 'ui', 86)": {'add': [92, 93], 'mod': [88, 89]}}}, {'path': 'modules/shared.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}}}, {'path': 'modules/text_generation.py', 'status': 'modified', 'Loc': {"(None, 'generate_reply', 88)": {'add': [189, 202, 205, 219, 224], 'mod': [199, 216]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"extensions/silero_tts/script.py",
"modules/shared.py",
"modules/text_generation.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 896256ee02273bebf723428ee41cab31930a69f4 | https://github.com/pandas-dev/pandas/issues/41423 | Docs
good first issue | DOC: pandas.Series(data=None, index=None, dtype=None, name=None, copy=False, fastpath=False) | No proper information on "copy" is present under [Documentation](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html) | null | https://github.com/pandas-dev/pandas/pull/41514 | null | {'base_commit': '896256ee02273bebf723428ee41cab31930a69f4', 'files': [{'path': 'pandas/core/series.py', 'status': 'modified', 'Loc': {"('Series', None, 194)": {'add': [253], 'mod': [226]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/series.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | 917acaa4524e0195c52a636fccf6a0de4eedd37b | https://github.com/deepfakes/faceswap/issues/1170 | docker | CUDA version incorrect in Dockerfile.gpu | The Dockerfile.gpu doesn't work for me. The build doesn't use the GPU at all.
I found that tensorflow cannot find shared library file libXXXX.so.11.0 (If I remember correctly, it's libcudart.so.11.0). I realize that the tensorflow version installed needs CUDA 11.0. But the original Dockerfile.gpu installs the CUDA 10.1.
If someone had similar issue, please modify the Dockerfile with:
FROM nvidia/cuda:11.0.3-cudnn8-devel-ubuntu16.04
| null | https://github.com/deepfakes/faceswap/pull/1232 | null | {'base_commit': '917acaa4524e0195c52a636fccf6a0de4eedd37b', 'files': [{'path': 'Dockerfile.gpu', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 22]}}}, {'path': 'INSTALL.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39, 279, 285], 'mod': [237, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248, 249, 251, 252, 254, 255, 257, 258, 259, 260, 261, 262, 264, 265, 266, 267, 268, 269, 270, 272, 281, 282, 283, 284, 287]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"INSTALL.md"
],
"test": [],
"config": [
"Dockerfile.gpu"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 9908e1b28525fe96394446be95fcb00785d0ca0c | https://github.com/All-Hands-AI/OpenHands/issues/5365 | bug | [Bug]: Editing Error "No replacement was performed" is not informative enough | ### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Describe the bug and reproduction steps
The agent got this error:
```
ERROR:
No replacement was performed. Multiple occurrences of old_str ` output_path = Path.joinpath(self._output_dir, "recipe_state.pt")
torch.save(state_dict, output_path)
logger.info(
"Recipe checkpoint of size "
f"{os.path.getsize(output_path) / 1000**3:.2f} GB "
f"saved to {output_path}"
)` in lines []. Please ensure it is unique.
```
`in lines []. Please ensure it is unique.` doesn't look right. Should we give out the specific line number?
Full trajectory: https://www.all-hands.dev/share?share_id=7c05665906ffb699d93426129b1ee8c50c3cc5c7dcb5e164de9c54f6468e7876
cc @ryanhoangt
### OpenHands Installation
Docker command in README
### OpenHands Version
_No response_
### Operating System
None
### Logs, Errors, Screenshots, and Additional Context
_No response_ | null | https://github.com/All-Hands-AI/OpenHands/pull/5397 | null | {'base_commit': '9908e1b28525fe96394446be95fcb00785d0ca0c', 'files': [{'path': 'openhands/runtime/action_execution_server.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 13]}, "('ActionExecutor', 'run_ipython', 178)": {'add': [201]}}}, {'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 5486, 5491, 5492, 10090]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [67]}}}, {'path': 'tests/unit/test_agent_skill.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [720, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733, 735, 737, 738, 739, 740, 741, 742, 743, 744, 745, 746, 747, 748, 749, 750, 752, 753, 754, 755, 756, 757, 758, 759, 760, 761, 762, 763, 764, 765, 768, 769, 770, 771, 772, 773, 775, 777, 778, 779, 780, 781, 782, 783, 784, 786, 787, 788, 789, 790, 792, 793, 794, 795, 796, 797, 798, 799, 800, 801]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"openhands/runtime/action_execution_server.py"
],
"doc": [],
"test": [
"tests/unit/test_agent_skill.py"
],
"config": [
"poetry.lock",
"pyproject.toml"
],
"asset": []
} | 1 |
pandas-dev | pandas | fa78ea801392f4f0d37ea7ddbbfe44e9c8c102bd | https://github.com/pandas-dev/pandas/issues/49647 | Code Style
good first issue | STYLE place standard library imports at top of file | Imports should typically be placed at the top of files. Sometimes, imports are placed inside functions to:
- avoid circular imports
- avoid `ImportError` if it's an optional dependency
Standard library imports should really always be at the top of files.
Noticed in https://github.com/pandas-dev/pandas/pull/49645 that this is often not the case
I've made a script to automate detecting when this is the case. So the task is:
```
git checkout -b standard-library-imports main
git pull git@github.com:MarcoGorelli/pandas.git standard-library-imports
git reset --hard FETCH_HEAD
pre-commit run stdlib-imports --all-files
```
Then, fixup any errors that are reported. Finally, stage your changes, commit them, push them to your fork, and open a pull request
Feel free to reach out if you run into any issues along the way
If anyone wants to take this, it would be a nice and welcome clean up!
---
EDIT: after going through a PR, I'm not sure it's worth introducing a check for this - but we can still take some of the cleanups it found | null | https://github.com/pandas-dev/pandas/pull/50116 | null | {'base_commit': 'fa78ea801392f4f0d37ea7ddbbfe44e9c8c102bd', 'files': [{'path': 'pandas/tests/apply/test_series_apply.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "(None, 'test_apply', 35)": {'mod': [40]}, "(None, 'test_map_decimal', 527)": {'mod': [528]}}}, {'path': 'pandas/tests/arrays/test_datetimelike.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'array_likes', 1337)": {'mod': [1349, 1350]}}}, {'path': 'pandas/tests/frame/indexing/test_indexing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('TestDataFrameIndexing', 'test_setitem_ambig', 468)": {'mod': [470]}}}, {'path': 'pandas/tests/frame/methods/test_to_records.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, "('TestDataFrameToRecords', 'test_to_records_with_Mapping_type', 60)": {'mod': [61, 62]}}}, {'path': 'pandas/tests/frame/test_constructors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 3, 4, 10]}, "('TestDataFrameConstructors', 'test_constructor_ordereddict', 468)": {'mod': [469]}, "('TestDataFrameConstructors', 'test_constructor_defaultdict', 719)": {'mod': [721]}, "('TestDataFrameConstructors', 'test_constructor_stdlib_array', 1343)": {'mod': [1346]}, "('TestDataFrameConstructors', 'test_constructor_list_of_namedtuples', 1545)": {'mod': [1547]}, "('TestDataFrameConstructors', 'test_constructor_list_of_dataclasses', 1560)": {'mod': [1562]}, "('TestDataFrameConstructors', 'test_constructor_list_of_dataclasses_with_varying_types', 1571)": {'mod': [1573]}, "('TestDataFrameConstructors', 'test_constructor_list_of_dataclasses_error_thrown', 1587)": {'mod': [1589]}}}, {'path': 'pandas/tests/groupby/test_filters.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, 
"(None, 'test_filter_against_workaround', 173)": {'mod': [195]}}}, {'path': 'pandas/tests/groupby/test_grouping.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "('TestGrouping', 'test_grouper_multilevel_freq', 169)": {'mod': [173, 174, 175, 176]}}}, {'path': 'pandas/tests/groupby/test_timegrouper.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3]}, "('TestGroupBy', 'test_first_last_max_min_on_time_data', 762)": {'mod': [766, 777]}}}, {'path': 'pandas/tests/indexes/test_common.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('TestCommon', 'test_copy_and_deepcopy', 134)": {'mod': [135, 136, 137, 138]}}}, {'path': 'pandas/tests/indexing/multiindex/test_slice.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('TestMultiIndexSlicers', 'test_multiindex_slicers_datetimelike', 247)": {'mod': [251, 253, 254, 255, 256]}}}, {'path': 'pandas/tests/io/excel/test_readers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "('TestReaders', 'test_read_from_file_url', 890)": {'mod': [900]}}}, {'path': 'pandas/tests/io/formats/test_printing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'test_repr_binary_type', 21)": {'mod': [22]}}}, {'path': 'pandas/tests/io/formats/test_to_csv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('TestToCSV', 'test_to_csv_doublequote', 84)": {'mod': [97]}}}, {'path': 'pandas/tests/io/json/test_pandas.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "('TestPandasContainer', 'test_to_s3', 1732)": {'mod': [1733]}}}, {'path': 'pandas/tests/io/parser/test_c_parser_only.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "(None, 'test_precise_conversion', 171)": {'mod': [172]}}}, {'path': 'pandas/tests/io/parser/test_encoding.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}, "(None, 'test_utf16_bom_skiprows', 47)": {'mod': [62]}}}, 
{'path': 'pandas/tests/io/parser/test_python_parser_only.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, "(None, 'test_sniff_delimiter_encoding', 100)": {'mod': [111]}}}, {'path': 'pandas/tests/io/pytables/test_store.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3], 'mod': [1]}, "(None, 'test_repr', 110)": {'mod': [129, 130]}, "(None, 'test_table_mixed_dtypes', 431)": {'mod': [444, 445]}, "(None, 'test_calendar_roundtrip_issue', 454)": {'mod': [461, 467, 468]}, "(None, 'test_same_name_scoping', 524)": {'mod': [537]}, "(None, 'test_store_index_name_numpy_str', 558)": {'mod': [561, 565]}, "(None, 'do_copy', 878)": {'mod': [880]}}}, {'path': 'pandas/tests/io/test_orc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'test_orc_reader_decimal', 100)": {'mod': [101]}}}, {'path': 'pandas/tests/io/xml/test_xml.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "(None, 'test_empty_string_etree', 493)": {'mod': [494]}, "(None, 'test_wrong_file_path_etree', 513)": {'mod': [514]}}}, {'path': 'pandas/tests/plotting/frame/test_frame.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 9]}, "('TestDataFramePlots', 'test_memory_leak', 1783)": {'mod': [1785, 1786]}}}, {'path': 'pandas/tests/reshape/concat/test_concat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('TestConcatenate', 'test_dtype_coerceion', 337)": {'mod': [346, 348, 349, 350]}}}, {'path': 'pandas/tests/reshape/concat/test_index.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('TestMultiIndexConcat', 'test_concat_multiindex_dfs_with_deepcopy', 241)": {'mod': [243]}}}, {'path': 'pandas/tests/reshape/test_get_dummies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, "('TestGetDummies', 'test_get_dummies_unicode', 165)": {'mod': [167]}}}, {'path': 'pandas/tests/series/test_arithmetic.py', 'status': 'modified', 'Loc': {'(None, None, None)': 
{'add': [1, 4]}, "('TestSeriesArithmetic', 'test_add_na_handling', 224)": {'mod': [225, 226]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"pandas/tests/arrays/test_datetimelike.py",
"pandas/tests/frame/methods/test_to_records.py",
"pandas/tests/indexes/test_common.py",
"pandas/tests/groupby/test_timegrouper.py",
"pandas/tests/reshape/test_get_dummies.py",
"pandas/tests/groupby/test_grouping.py",
"pandas/tests/reshape/concat/test_concat.py",
"pandas/tests/frame/test_constructors.py",
"pandas/tests/indexing/multiindex/test_slice.py",
"pandas/tests/io/test_orc.py",
"pandas/tests/io/parser/test_encoding.py",
"pandas/tests/plotting/frame/test_frame.py",
"pandas/tests/io/formats/test_printing.py",
"pandas/tests/io/formats/test_to_csv.py",
"pandas/tests/io/json/test_pandas.py",
"pandas/tests/reshape/concat/test_index.py",
"pandas/tests/io/excel/test_readers.py",
"pandas/tests/series/test_arithmetic.py",
"pandas/tests/io/xml/test_xml.py",
"pandas/tests/io/pytables/test_store.py",
"pandas/tests/io/parser/test_python_parser_only.py",
"pandas/tests/groupby/test_filters.py",
"pandas/tests/frame/indexing/test_indexing.py",
"pandas/tests/io/parser/test_c_parser_only.py",
"pandas/tests/apply/test_series_apply.py"
],
"config": [],
"asset": []
} | 1 |
pallets | flask | 024f0d384cf5bb65c76ac59f8ddce464b2dc2ca1 | https://github.com/pallets/flask/issues/3555 | json | Remove simplejson | In modern Python it's unlikely to be significantly better than the built-in `json`. The module used by `JSONMixin` is overridable, so users can plug it in again if they want.
See pallets/itsdangerous#146 and pallets/werkzeug#1766. | null | https://github.com/pallets/flask/pull/3562 | null | {'base_commit': '024f0d384cf5bb65c76ac59f8ddce464b2dc2ca1', 'files': [{'path': 'CHANGES.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}}}, {'path': 'docs/api.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [287, 288, 289, 290, 291, 293, 295, 296, 297, 298, 300, 302, 304, 305, 306, 308, 309, 310, 311, 313, 314, 315, 316, 317, 322, 325, 327, 328, 329, 331, 332]}}}, {'path': 'docs/installation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [42, 43, 44, 51]}}}, {'path': 'src/flask/json/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 3], 'mod': [1, 7, 8, 20, 21, 22, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 38, 39, 40, 41, 44, 45, 46, 47, 48, 49]}, "(None, 'dumps', 179)": {'add': [196], 'mod': [180, 181, 182, 183, 185, 186, 187, 190, 191, 192, 193, 195, 203, 204]}, "(None, 'loads', 217)": {'add': [234], 'mod': [218, 219, 220, 221, 223, 224, 225, 228, 229, 230, 231, 233, 239, 240, 241, 242, 243]}, "(None, 'jsonify', 296)": {'add': [331], 'mod': [297, 298, 299, 300, 301, 302, 304, 305, 307, 308, 309, 310, 311, 312, 314, 318, 320, 321, 322, 324, 335, 336, 338, 339, 340, 341]}, "('JSONEncoder', None, 52)": {'mod': [53, 54, 55, 57, 58, 60, 61]}, "('JSONEncoder', 'default', 64)": {'mod': [65, 66, 67, 69, 70, 72, 73, 74, 75, 76, 77, 78, 79, 91]}, "('JSONDecoder', None, 94)": {'mod': [95, 96, 97, 98]}, "(None, '_dump_arg_defaults', 102)": {'mod': [109, 110, 111, 113, 114]}, "(None, '_load_arg_defaults', 122)": {'mod': [129, 130, 131]}, "(None, 'detect_encoding', 136)": {'mod': [136, 137, 139, 140, 141, 143, 144, 145, 146, 148, 149, 151, 152, 154, 155, 157, 158, 160, 161, 162, 164, 165, 167, 168, 170, 171, 173, 174, 176]}, "(None, 'dump', 208)": {'mod': [209, 212, 213]}, "(None, 'load', 247)": {'mod': [248, 250]}, "(None, 'htmlsafe_dumps', 254)": {'mod': [254, 255, 256, 
257, 258, 259, 261, 263, 264, 265, 266, 268, 269, 270, 273, 274, 275, 276, 277, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288]}, "(None, 'htmlsafe_dump', 291)": {'mod': [292, 293]}}}, {'path': 'src/flask/json/tag.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [48]}, "('TagMarkup', None, 169)": {'mod': [170, 172]}, "('TaggedJSONSerializer', None, 215)": {'mod': [225]}}}, {'path': 'tests/test_helpers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [16]}, "('TestJSON', None, 66)": {'mod': [67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85]}, "('TestJSON', 'test_template_escaping', 252)": {'mod': [256]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 27]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/flask/json/__init__.py",
"src/flask/json/tag.py"
],
"doc": [
"docs/api.rst",
"docs/installation.rst",
"CHANGES.rst"
],
"test": [
"tests/test_helpers.py"
],
"config": [
"tox.ini"
],
"asset": []
} | 1 |
3b1b | manim | 384895b9a8da0fcdb3b92868fb5965c5e6de1ed5 | https://github.com/3b1b/manim/issues/293 | Outdated DockerFile dependencies | The DockerFile inside the manim-master still contains the python version 2.7.12. Considering that manim had no longer support the python 2. This could lead to a syntax error. Please fix this issue ASAP. | null | https://github.com/3b1b/manim/pull/301 | null | {'base_commit': '384895b9a8da0fcdb3b92868fb5965c5e6de1ed5', 'files': [{'path': 'Dockerfile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 13], 'mod': [1, 2, 3, 4, 6, 9, 10, 11, 12, 15, 16, 18, 19, 20, 22]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"Dockerfile"
],
"asset": []
} | 1 | |
yt-dlp | yt-dlp | 3699eeb67cad333272b14a42dd3843d93fda1a2e | https://github.com/yt-dlp/yt-dlp/issues/9567 | site-bug | [TikTok] New API fix adds non-playable video codec in available formats | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Global
### Provide a description that is worded well enough to be understood
Hi! New API fix adds for some videos a new codec bytevc2 which cannot be played by multimedia players (I used VLC). By default yt_dlp chooses normal codec, but I use `-S res:1080,vcodec:avc1,ext:mp4:m4a` format selection, so yt_dlp downloads video with this new codec.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
user@host:~$ python3 -m yt_dlp -F https://vm.tiktok.com/ZMMPDNEJL/
[vm.tiktok] Extracting URL: https://vm.tiktok.com/ZMMPDNEJL/
[vm.tiktok] ZMMPDNEJL: Downloading webpage
[TikTok] Extracting URL: https://www.tiktok.com/@soyko_max/video/7351538939712359713?_t=8l5sekUiLWo&_r=1
[TikTok] 7351538939712359713: Downloading video feed
[info] Available formats for 7351538939712359713:
ID EXT RESOLUTION │ FILESIZE TBR PROTO │ VCODEC ACODEC MORE INFO
───────────────────────────────────────────────────────────────────────────────────────────────────────────────
download_addr-0 mp4 720x1280 │ 1.31MiB https │ h264 aac Download video, watermarked (API)
download_addr-1 mp4 720x1280 │ 1.31MiB https │ h264 aac Download video, watermarked
download_addr-2 mp4 720x1280 │ 1.31MiB https │ h264 aac Download video, watermarked
h264_540p_986746-0 mp4 1048x576 │ 1.24MiB 986k https │ h264 aac Direct video (API)
h264_540p_986746-1 mp4 1048x576 │ 1.24MiB 986k https │ h264 aac Direct video
h264_540p_986746-2 mp4 1048x576 │ 1.24MiB 986k https │ h264 aac Direct video
bytevc1_540p_263555-0 mp4 1048x576 │ 339.96KiB 263k https │ h265 aac Playback video (API)
bytevc1_540p_263555-1 mp4 1048x576 │ 339.96KiB 263k https │ h265 aac Playback video
bytevc1_540p_263555-2 mp4 1048x576 │ 339.96KiB 263k https │ h265 aac Playback video
bytevc1_540p_344910-0 mp4 1048x576 │ 444.91KiB 344k https │ h265 aac Playback video (API)
bytevc1_540p_344910-1 mp4 1048x576 │ 444.91KiB 344k https │ h265 aac Playback video
bytevc1_540p_344910-2 mp4 1048x576 │ 444.91KiB 344k https │ h265 aac Playback video
bytevc1_540p_507345-0 mp4 1048x576 │ 654.43KiB 507k https │ h265 aac Direct video (API)
bytevc1_540p_507345-1 mp4 1048x576 │ 654.43KiB 507k https │ h265 aac Direct video
bytevc1_540p_507345-2 mp4 1048x576 │ 654.43KiB 507k https │ h265 aac Direct video
bytevc2_720p_616180-0 mp4 1280x704 │ 794.82KiB 616k https │ bytevc2 aac Playback video (API)
bytevc2_720p_616180-1 mp4 1280x704 │ 794.82KiB 616k https │ bytevc2 aac Playback video
bytevc2_720p_616180-2 mp4 1280x704 │ 794.82KiB 616k https │ bytevc2 aac Playback video
bytevc1_720p_595186-0 mp4 1280x704 │ 767.74KiB 595k https │ h265 aac Playback video (API)
bytevc1_720p_595186-1 mp4 1280x704 │ 767.74KiB 595k https │ h265 aac Playback video
bytevc1_720p_595186-2 mp4 1280x704 │ 767.74KiB 595k https │ h265 aac Playback video
user@host:~$ python3 -m yt_dlp -vU -S res:1080,vcodec:avc1,ext:mp4:m4a https://vm.tiktok.com/ZMMPDNEJL/
[debug] Command-line config: ['-vU', '-S', 'res:1080,vcodec:avc1,ext:mp4:m4a', 'https://vm.tiktok.com/ZMMPDNEJL/']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447]
[debug] Lazy loading extractors is disabled
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-5.15.0-101-generic-x86_64-with-glibc2.35 (OpenSSL 3.0.2 15 Mar 2022, glibc 2.35)
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.1, sqlite3-3.37.2, urllib3-2.2.1, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1807 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.03.10 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)
[vm.tiktok] Extracting URL: https://vm.tiktok.com/ZMMPDNEJL/
[vm.tiktok] ZMMPDNEJL: Downloading webpage
[TikTok] Extracting URL: https://www.tiktok.com/@soyko_max/video/7351538939712359713?_t=8l5sekUiLWo&_r=1
[debug] [TikTok] iid=7351149742343391009
[TikTok] 7351538939712359713: Downloading video feed
[debug] Sort order given by user: res:1080, vcodec:avc1, ext:mp4:m4a
[debug] Sort order given by extractor: quality, codec, size, br
[debug] Formats sorted by: hasvid, ie_pref, res:1080(1080.0), vcodec:avc1(7), vext:mp4(6), aext:m4a(8), quality, acodec, size, br, lang, fps, hdr:12(7), channels, asr, proto, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 7351538939712359713: Downloading 1 format(s): bytevc2_720p_616180-2
[debug] Invoking http downloader on "https://v16m.byteicdn.com/366e989edb75b43b4e545b0f6f94180e/6607a32c/video/tos/useast2a/tos-useast2a-ve-0068-euttp/oIEDMCMe2rmeIExnh6mGk70AEkerG1aIRgfFj6/?a=0&bti=OHYpOTY0Zik3OjlmOm01MzE6ZDQ0MDo%3D&ch=0&cr=13&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1202&bt=601&cs=5&ds=3&ft=teSL~8QLodzR12NvvEh3hIxR34DaRq_45SY&mime_type=video_mp4&qs=14&rc=aTwzOGdkOGRnMzw1NDQ1OUBpam9pbng5cjx4cjMzZjczM0AvL18yYmBgXjMxXy4tLy5eYSNsa2FnMmRrZy9gLS1kMWNzcw%3D%3D&vvpl=1&l=20240329232904ECF8D33243F1CE142F0B&btag=e00088000&cc=10"
[download] Destination: Оригинал) [7351538939712359713].mp4
[download] 100% of 794.82KiB in 00:00:00 at 7.49MiB/s
```
| null | https://github.com/yt-dlp/yt-dlp/pull/9575 | null | {'base_commit': '3699eeb67cad333272b14a42dd3843d93fda1a2e', 'files': [{'path': 'yt_dlp/extractor/tiktok.py', 'status': 'modified', 'Loc': {"('TikTokBaseIE', 'extract_addr', 275)": {'add': [276, 288], 'mod': [290]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/tiktok.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
xtekky | gpt4free | 0d8e4ffa2c0706b0381f53c3985d04255b7170f5 | https://github.com/xtekky/gpt4free/issues/2173 | bug
stale | Disable g4f logging completely | **Bug description**
In my script I have my customized logging, but whenever I use it it prints 2 times (one from my logger, one from g4f logger).
How can I turn off the logger inside the library? Already tried a bunch of stuff with no results.
P.S. Are you using the root logger maybe? If that is the case, please use it with the module name
ex.
1. Create a new logger in new class
2. Set logging level to DEBUG
3. Log something
4. Enjoy duplicated output
**Screenshots**

**Environment**
- python version: 3.11
- location ( are you in a cloudfare flagged country ) ? nope
| null | https://github.com/xtekky/gpt4free/pull/2347 | null | {'base_commit': '0d8e4ffa2c0706b0381f53c3985d04255b7170f5', 'files': [{'path': 'g4f/Provider/Ai4Chat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, "('Ai4Chat', 'create_async_generator', 37)": {'mod': [87]}}}, {'path': 'g4f/Provider/Mhystical.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20]}, "('Mhystical', 'filter_response', 81)": {'mod': [87, 88]}}}, {'path': 'g4f/Provider/you/har_file.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14]}, "(None, 'get_telemetry_ids', 79)": {'mod': [84, 91, 115]}}}, {'path': 'g4f/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 14]}}}, {'path': 'g4f/api/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [24]}, "('Api', 'streaming', 196)": {'mod': [203]}, "('Api', 'chat_completions', 166)": {'mod': [210]}, "('Api', 'generate_image', 214)": {'mod': [225]}}}, {'path': 'g4f/api/_logging.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}, "('__InterceptHandler', None, 12)": {'mod': [12, 13, 14, 15, 16, 17, 19, 20, 21, 22, 24, 25, 26]}, "(None, 'hook_logging', 31)": {'mod': [31, 32]}}}, {'path': 'g4f/gui/server/api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}, "('Api', '_create_response_stream', 138)": {'mod': [158, 168]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [42]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"g4f/Provider/Ai4Chat.py",
"g4f/Provider/Mhystical.py",
"g4f/Provider/you/har_file.py",
"setup.py",
"g4f/api/__init__.py",
"g4f/__init__.py",
"g4f/gui/server/api.py",
"g4f/api/_logging.py"
],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 |
pandas-dev | pandas | 324208eaa66a528f1e88f938c71c2d8efb8304f3 | https://github.com/pandas-dev/pandas/issues/5420 | Bug
Docs
Indexing | BUG: loc should not fallback for integer indexing for multi-index | https://groups.google.com/forum/m/#!topic/pydata/W0e3l0UvNwI
| null | https://github.com/pandas-dev/pandas/pull/7497 | null | {'base_commit': '324208eaa66a528f1e88f938c71c2d8efb8304f3', 'files': [{'path': 'doc/source/v0.14.1.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [64]}}}, {'path': 'pandas/core/index.py', 'status': 'modified', 'Loc': {"('Index', '_convert_list_indexer_for_mixed', 607)": {'mod': [612]}}}, {'path': 'pandas/tests/test_indexing.py', 'status': 'modified', 'Loc': {"('TestIndexing', None, 86)": {'add': [808]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/index.py"
],
"doc": [
"doc/source/v0.14.1.txt"
],
"test": [
"pandas/tests/test_indexing.py"
],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 6d2c57fa010c12f21f700034b5651519670b9b9d | https://github.com/pandas-dev/pandas/issues/3561 | Bug
Indexing | DataFrame.ix losing row ordering when index has duplicates | ``` python
import pandas as pd
ind = ['A', 'A', 'B', 'C']
df = pd.DataFrame({'test':range(len(ind))}, index=ind)
rows = ['C', 'B']
res = df.ix[rows]
assert rows == list(res.index) # fails
```
The problem is that the resulting DataFrame keeps the ordering of the `df.index` and not the `rows` key. You'll notice that the `rows` key doesn't reference a duplicate value.
| null | https://github.com/pandas-dev/pandas/pull/3563 | null | {'base_commit': '6d2c57fa010c12f21f700034b5651519670b9b9d', 'files': [{'path': 'RELEASE.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [93, 150]}}}, {'path': 'doc/source/indexing.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1370]}}}, {'path': 'pandas/core/index.py', 'status': 'modified', 'Loc': {"('Index', None, 50)": {'add': [861]}}}, {'path': 'pandas/core/indexing.py', 'status': 'modified', 'Loc': {"('_NDFrameIndexer', '_getitem_iterable', 412)": {'mod': [461, 462]}, "('_NDFrameIndexer', '_convert_to_indexer', 464)": {'mod': [572, 573, 574, 575, 576, 577, 578, 579, 581, 582, 584]}}}, {'path': 'pandas/index.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [269, 270, 271]}}}, {'path': 'pandas/lib.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [418]}}}, {'path': 'pandas/tests/test_frame.py', 'status': 'modified', 'Loc': {"('TestDataFrame', '_check_df', 4667)": {'mod': [4671, 4672]}}}, {'path': 'pandas/tests/test_indexing.py', 'status': 'modified', 'Loc': {"('TestIndexing', None, 85)": {'add': [786]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/index.pyx",
"pandas/core/index.py",
"pandas/core/indexing.py",
"pandas/lib.pyx"
],
"doc": [
"doc/source/indexing.rst",
"RELEASE.rst"
],
"test": [
"pandas/tests/test_indexing.py",
"pandas/tests/test_frame.py"
],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | ce8a11a62f8a126ed54dd0ede51cf2c196ed310d | https://github.com/All-Hands-AI/OpenHands/issues/2977 | good first issue
frontend
severity:low
small effort | Rename and/or properly document the two different `changeAgentState` functions | There are two `changeAgentState` functions that should probably be renamed and properly documented to avoid confusion for the future.
https://github.com/OpenDevin/OpenDevin/blob/01ce1e35b5b40e57d96b15a7fc9bee4eb8f6966d/frontend/src/state/agentSlice.tsx#L10-L12
https://github.com/OpenDevin/OpenDevin/blob/01ce1e35b5b40e57d96b15a7fc9bee4eb8f6966d/frontend/src/services/agentStateService.ts#L7-L18 | null | https://github.com/All-Hands-AI/OpenHands/pull/3050 | null | {'base_commit': 'ce8a11a62f8a126ed54dd0ede51cf2c196ed310d', 'files': [{'path': 'frontend/src/services/observations.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "(None, 'handleObservationMessage', 10)": {'mod': [28]}}}, {'path': 'frontend/src/state/agentSlice.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [10, 16]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"frontend/src/state/agentSlice.tsx",
"frontend/src/services/observations.ts"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | 9438672b1cf80602fc93536670d9601d655377f5 | https://github.com/deepfakes/faceswap/issues/224 | feature | Align rotation of input faces for GAN conversions | Currently, the extractor finds a rotation matrix for each face using umeyama so it can generate a faceset with all the faces mostly upright. Unfortunately this rotation matrix isn't stored in the alignments file, only the bbox (of the un-rotated face) and facial alignments. For the GAN model, when it comes time to convert, the faces aren't rotated upright before being fed through the model so I doubt anyone has been able to get good results for faces that aren't completely upright.
I propose we store the rotation matrix in the alignments file during extract, then at conversion, re-apply it to the cropped face to make it upright before feeding through the model. The swapped output face then needs to be rotated in the inverse direction to match it with the frame again. Hopefully this is possible. | null | https://github.com/deepfakes/faceswap/pull/217 | null | {'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8]}}}, {'path': 'lib/ModelAE.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 17]}}}, {'path': 'lib/training_data.py', 'status': 'modified', 'Loc': {"('TrainingDataGenerator', '__init__', 9)": {'add': [11]}, "('TrainingDataGenerator', None, 8)": {'mod': [9, 64]}, "('TrainingDataGenerator', 'read_image', 37)": {'mod': [45]}, "('TrainingDataGenerator', 'random_warp', 64)": {'mod': [70, 71, 73, 74, 78, 79, 82]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "('FullHelpArgumentParser', None, 18)": {'mod': [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {'path': 'plugins/Convert_Adjust.py', 'status': 'modified', 'Loc': {"('Convert', None, 8)": {'mod': [15]}, "('Convert', 'patch_image', 15)": {'mod': [22]}}}, {'path': 'plugins/Convert_GAN.py', 'status': 'removed', 'Loc': {}}, {'path': 'plugins/Convert_Masked.py', 'status': 'modified', 'Loc': {"('Convert', '__init__', 9)": {'add': [10, 19]}, "('Convert', None, 8)": {'add': [62], 'mod': [9, 22, 23]}, "('Convert', 'get_new_face', 63)": {'mod': [66, 68]}}}, {'path': 'plugins/Extract_Align.py', 'status': 'renamed', 'Loc': {"('Extract', None, 7)": {'mod': [7]}}}, {'path': 'plugins/Model_GAN/Model.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 17]}, "('GANModel', None, 18)": {'add': [23]}, "('GANModel', 'Decoder_ps', 112)": {'add': [121], 'mod': [124]}, 
"('GANModel', '__init__', 24)": {'mod': [32, 33, 34, 36, 37, 41, 42, 44, 45, 46, 48, 49, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64]}, "('GANModel', 'conv_block', 71)": {'mod': [73, 74, 75]}, "('GANModel', 'res_block', 78)": {'mod': [80, 81, 83, 84]}, "('GANModel', 'build_generator', 70)": {'mod': [89, 98, 99, 100, 101, 112, 113, 114, 115, 116, 126, 127, 128, 139]}, "('GANModel', 'block', 90)": {'mod': [91, 92]}, "('GANModel', 'conv_block_d', 142)": {'mod': [144, 145, 147, 148, 149]}, "('GANModel', 'Discriminator', 148)": {'mod': [153, 154, 155]}, "('GANModel', 'build_discriminator', 141)": {'mod': [157, 158]}, "('GANModel', 'save_weights', 174)": {'mod': [176, 177, 178, 179]}}}, {'path': 'plugins/Model_GAN/Trainer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('Trainer', '__init__', 22)": {'add': [26, 28], 'mod': [30]}, "('Trainer', None, 14)": {'add': [33, 95], 'mod': [17, 22]}, "('Trainer', 'showG', 101)": {'add': [115, 141], 'mod': [118, 119, 123, 124, 127, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 140, 144, 145, 149, 150, 153]}, "('GANTrainingDataGenerator', None, 7)": {'mod': [8, 9]}, "('Trainer', 'train_one_step', 34)": {'mod': [40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 53, 54, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 80, 81, 82, 84, 85, 86, 87, 90]}, "('Trainer', 'show_sample', 96)": {'mod': [99, 101, 102, 103, 104, 105, 106, 107, 108, 109, 111, 112, 113, 114]}}}, {'path': 'plugins/Model_LowMem.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}, "('Model', None, 15)": {'mod': [15]}, "('Trainer', None, 67)": {'mod': [67, 68]}}}, {'path': 'plugins/Model_Original.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'mod': [9]}, "('Model', None, 15)": {'mod': [15]}, "('Model', 'Decoder', 58)": {'mod': [65, 67, 68]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 3, 5, 6, 8, 9, 11, 13, 14, 15, 
16, 17, 18, 19, 20, 21, 22, 24, 25, 26, 27, 28, 29, 30, 32, 33, 34, 35, 36, 38, 39, 40, 41, 42, 44, 45, 46, 47, 48, 50, 51, 52, 53, 54, 56, 57, 58, 59, 60, 61, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73, 74, 75, 77, 78, 79, 80, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 96, 97, 98, 99, 100, 101, 103, 104, 105, 106, 107, 109, 110, 111, 112, 113, 114, 116, 117, 118, 119, 120, 121, 123, 124, 125, 126, 128, 129, 130, 131, 133, 134, 135, 136, 137, 138, 139, 140, 142, 144, 145, 147, 148, 149, 150, 151, 153, 154, 156, 157, 159, 160, 162, 163, 164, 165, 166, 167, 169, 170, 171, 173, 174, 175, 177, 178, 179, 181, 182, 183, 184, 186, 187, 188, 189, 190, 192, 193, 194, 195, 196, 197, 198, 199, 200]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {"('ExtractTrainingData', 'handleImage', 75)": {'mod': [90]}}}, {'path': 'scripts/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 3, 5, 6, 7, 8, 10, 11, 13, 14, 15, 17, 18, 19, 20, 21, 23, 25, 26, 27, 28, 29, 30, 31, 32, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 94, 95, 96, 98, 99, 100, 101, 103, 104, 106, 107, 108, 109, 110, 111, 112, 114, 115, 116, 117, 118, 119, 120, 121, 122, 123, 125, 126, 127, 129, 130, 131, 133, 134, 135, 136, 137, 138, 140, 141, 142, 143, 145, 146, 148, 150, 152, 154, 155, 157, 158, 159, 161, 162, 163, 165, 166, 167, 168, 169, 170, 171, 172, 173, 175, 176, 177, 178, 179, 180, 181, 183, 185, 186, 187, 188, 189, 190, 191, 192, 193, 194]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/training_data.py",
"plugins/Convert_Adjust.py",
"plugins/Convert_GAN.py",
"plugins/Extract_Align.py",
"plugins/Model_GAN/Model.py",
"plugins/Model_LowMem.py",
"scripts/train.py",
"faceswap.py",
"plugins/Model_Original.py",
"plugins/Convert_Masked.py",
"plugins/Model_GAN/Trainer.py",
"lib/utils.py",
"lib/ModelAE.py",
"lib/cli.py",
"scripts/convert.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 2bf09b8a2026b79b11d178d391327035dde9f948 | https://github.com/scrapy/scrapy/issues/710 | item_dropped signal should pass response arg as item_scraped does | I highly use request and response.meta in item_scraped signal handler.
Why doesn't item_dropped pass a response argument the way item_scraped does?
| null | https://github.com/scrapy/scrapy/pull/724 | null | {'base_commit': '2bf09b8a2026b79b11d178d391327035dde9f948', 'files': [{'path': 'docs/topics/signals.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [98], 'mod': [86]}}}, {'path': 'scrapy/core/scraper.py', 'status': 'modified', 'Loc': {"('Scraper', '_itemproc_finished', 198)": {'mod': [208]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/core/scraper.py"
],
"doc": [
"docs/topics/signals.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
comfyanonymous | ComfyUI | f81dbe26e2e363c28ad043db67b59c11bb33f446 | https://github.com/comfyanonymous/ComfyUI/issues/2851 | Differential Diffusion: Giving Each Pixel Its Strength | Hello,
I would like to suggest implementing my paper: Differential Diffusion: Giving Each Pixel Its Strength.
The paper allows a user to edit a picture by a change map that describes how much each region should change.
The editing process is typically guided by textual instructions, although it can also be applied without guidance.
We support both continuous and discrete editing.
Our framework is training and fine tuning free! And has negligible penalty of the inference time.
Our implementation is diffusers-based.
We already tested it on 4 different diffusion models (Kadinsky, DeepFloyd IF, SD, SD XL).
We are confident that the framework can also be ported to other diffusion models, such as SD Turbo, Stable Cascade, and amused.
I notice that you usually stick to white==change convention, which is opposite to the convention we used in the paper.
The paper can be thought of as a generalization of some existing techniques.
A black map is just regular txt2img ("0"),
A map of one color (which isn't black) can be thought as img2img,
A map of two colors which one color is white can be thought as inpaint.
And the rest? It's completely new!
In the paper, we suggest some further applications such as soft inpainting and strength visualization.
Site:
https://differential-diffusion.github.io/
Paper:
https://differential-diffusion.github.io/paper.pdf
Repo:
https://github.com/exx8/differential-diffusion | null | https://github.com/comfyanonymous/ComfyUI/pull/2876 | null | {'base_commit': 'f81dbe26e2e363c28ad043db67b59c11bb33f446', 'files': [{'path': 'comfy/samplers.py', 'status': 'modified', 'Loc': {"('KSamplerX0Inpaint', 'forward', 277)": {'add': [278]}}}, {'path': 'nodes.py', 'status': 'modified', 'Loc': {"(None, 'init_custom_nodes', 1936)": {'add': [1963]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"nodes.py",
"comfy/samplers.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
pandas-dev | pandas | bcc5160b3a5b0fc9c531da194c6bb83619045434 | https://github.com/pandas-dev/pandas/issues/18734 | good first issue
Needs Tests | ddof for np.std in df.agg changes depending on how given & lambda expression does not work correctly in a list of functions | #### Code Sample, a copy-pastable example if possible
```python
In [31]: import numpy as np
In [32]: import pandas as pd
In [33]: df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['A', 'B'])
In [34]: df
Out[34]:
A B
0 0 1
1 2 3
2 4 5
In [35]: df.agg(np.std) # Behavior of ddof=0
Out[35]:
A 1.632993
B 1.632993
dtype: float64
In [36]: df.agg([np.std]) # Behavior of ddof=1
Out[36]:
A B
std 2.0 2.0
In [37]: # So how to get the ddof=0 behavior when giving a list of functions?
In [39]: df.agg([lambda x: np.std(x)]) # This gives a numerically unexpected result.
Out[39]:
A B
<lambda> <lambda>
0 0.0 0.0
1 0.0 0.0
2 0.0 0.0
In [40]: df.agg([np.mean, lambda x: np.std(x)]) # This gives an error.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-40-52f4ec4195b5> in <module>()
----> 1 df.agg([np.mean, lambda x: np.std(x)])
/Users/ikeda/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/pandas/core/frame.py in aggregate(self, func, axis, *args, **kwargs)
4740 if axis == 0:
4741 try:
-> 4742 result, how = self._aggregate(func, axis=0, *args, **kwargs)
4743 except TypeError:
4744 pass
/Users/ikeda/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/pandas/core/base.py in _aggregate(self, arg, *args, **kwargs)
537 return self._aggregate_multiple_funcs(arg,
538 _level=_level,
--> 539 _axis=_axis), None
540 else:
541 result = None
/Users/ikeda/.pyenv/versions/anaconda3-4.4.0/lib/python3.6/site-packages/pandas/core/base.py in _aggregate_multiple_funcs(self, arg, _level, _axis)
594 # if we are empty
595 if not len(results):
--> 596 raise ValueError("no results")
597
598 try:
ValueError: no results
```
#### Problem description
When using, e.g., `df.agg`, the `ddof` (degrees of freedom) value for the function `np.std` changes depending on how the function is given (single function or a list of functions), which can be very confusing. I believe the behavior should be unified in some way.
Furthermore, I could not find a way to obtain the `np.std` result with `ddof=0` by supplying it as a member of a list of functions. A `lambda` expression does not work well in a list of functions (it gives numerically unexpected results or even errors). This prevents us from using many useful methods like `df.agg`, `df.apply`, and `df.describe` when we want the `ddof=0` behavior.
From https://github.com/pandas-dev/pandas/issues/13344, I guess Developers prefer the `ddof=1` behavior in pandas. So the expected behavior should be as below.
#### Expected Output
```
In [35]: df.agg(np.std) # Behavior of ddof=1
Out[35]:
A 2.0
B 2.0
dtype: float64
In [38]: df.agg([lambda x: np.std(x)]) # To obtain the ddof=0 results
Out[38]:
A B
<lambda> 1.632993 1.632993
In [41]: df.agg([np.mean, lambda x: np.std(x)])
A B
mean 2.0 3.0
<lambda> 1.632993 1.632993
```
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit: None
pandas: 0.21.0
pytest: 3.0.7
pip: 9.0.1
setuptools: 27.2.0
Cython: 0.25.2
numpy: 1.13.3
scipy: 0.19.0
pyarrow: None
xarray: None
IPython: 5.3.0
sphinx: 1.5.6
patsy: 0.4.1
dateutil: 2.6.1
pytz: 2017.3
blosc: None
bottleneck: 1.2.1
tables: 3.3.0
numexpr: 2.6.2
feather: None
matplotlib: 2.0.2
openpyxl: 2.4.7
xlrd: 1.0.0
xlwt: 1.2.0
xlsxwriter: 0.9.6
lxml: 3.7.3
bs4: 4.6.0
html5lib: 0.999
sqlalchemy: 1.1.9
pymysql: None
psycopg2: None
jinja2: 2.9.6
s3fs: None
fastparquet: None
pandas_gbq: None
pandas_datareader: None
</details>
| null | https://github.com/pandas-dev/pandas/pull/52371 | null | {'base_commit': 'bcc5160b3a5b0fc9c531da194c6bb83619045434', 'files': [{'path': 'pandas/tests/apply/test_frame_apply.py', 'status': 'modified', 'Loc': {"(None, 'test_agg_list_like_func_with_args', 1648)": {'add': [1667]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"pandas/tests/apply/test_frame_apply.py"
],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | c13703c8dfb7324a05a82e8befe9b203a6590257 | https://github.com/scikit-learn/scikit-learn/issues/29742 | Bug
Sprint | spin docs --no-plot runs the examples | Seen at the EuroScipy sprint
Commands run by spin:
```
$ export SPHINXOPTS=-W -D plot_gallery=0 -j auto
$ cd doc
$ make html
```
Looks like our Makefile does not use SPHINXOPTS the same way as expected:
Probably we have a slightly different way of building the doc
```
❯ make html-noplot -n
sphinx-build -D plot_gallery=0 -b html -d _build/doctrees -T . -jauto \
_build/html/stable
echo
echo "Build finished. The HTML pages are in _build/html/stable."
``` | null | https://github.com/scikit-learn/scikit-learn/pull/29744 | null | {'base_commit': 'c13703c8dfb7324a05a82e8befe9b203a6590257', 'files': [{'path': 'doc/Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [68], 'mod': [5]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"doc/Makefile"
],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 147c8166852db64de12b851b8307f44c9e8fe0dd | https://github.com/huggingface/transformers/issues/15640 | Add support for ONNX-TensorRT conversion for GPT-J6B (and possible bug in rotary embedding) | ### Who can help
@patil-suraj
## Information
Model I am using: GPT-J
The problem arises when using:
* [x] the official example scripts: (give details below)
* [x] my own modified scripts: (give details below)
## Description
I opened this issue for two reasons:
1. This is not strictly a bug report, rather a change that enables converting this model to ONNX and then parsing it using the current TensorRT ONNX parser.
2. Possible implementation bug in GPT-J.
## Details
1. When exporting GPT-J to ONNX using the latest version (v4.16.2), one of the ops that is exported is [SplitToSequence](https://github.com/onnx/onnx/blob/main/docs/Operators.md#SplitToSequence) (along with more Sequence* ops) that is currently not supported in the [TensorRT ONNX parser](https://github.com/onnx/onnx-tensorrt/blob/master/docs/operators.md).
This is entirely due to just 1 line of code that uses `torch.repeat_interleave`. ([relevant line](https://github.com/huggingface/transformers/blob/52d2e6f6e904ef9b75c78716ce77b98196ed837a/src/transformers/models/gptj/modeling_gptj.py#L67))
```
sin, cos = map(lambda t: t[None, offset : x.shape[1] + offset, None, :].repeat_interleave(2, 3), sincos)
```
By replacing `lambda t` with this:
```
lambda t: t.view(-1, 1).repeat(1, 2).view(seq_len, -1)[None, offset : x.shape[1] + offset, None, :]
```
we get the exact same output tensors but now exporting to ONNX doesn't include any Sequence* ops, and TensorRT can parse it successfully.
The suggested function is even faster, although probably not critical in this huge model (benched only on CPU):
```
original: 106 µs ± 20.9 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
suggested: 32.4 µs ± 6.55 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
```
2. I was following the implementation in EleutherAI for rotary positional embeddings and I'm trying to understand if this is a bug or I'm simply missing something (would love an explanation if you can spare the time) but there (EleutherAI) they implement this function (rotary positional embedding) using `torch.cat` instead of `torch.repeat_interleave`, as can be seen [here](https://github.com/EleutherAI/gpt-neox/blob/b30afd1d0a1d06220be9b5f2c9c9c1523defba96/megatron/model/positional_embeddings.py#L41).
If I'm not missing something, the EleutherAI version transforms a tensor from
```
[[1,2,3],
[4,5,6]]
```
to
```
[[1,2,3,1,2,3],
[4,5,6,4,5,6]]
```
and HF version (using repeat_interleave):
```
[[1,2,3],
[4,5,6]]
```
to
```
[[1,1,2,2,3,3],
[4,4,5,5,6,6]]
```
Can anyone confirm the current implementation is indeed correct? Because otherwise `cat` and `repeat_interleave` are very different, and the rest of the implementation doesn't take it into account. | null | https://github.com/huggingface/transformers/pull/16492 | null | {'base_commit': '147c8166852db64de12b851b8307f44c9e8fe0dd', 'files': [{'path': 'src/transformers/models/gptj/modeling_gptj.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [64]}, "(None, 'apply_rotary_pos_emb', 65)": {'mod': [66]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/models/gptj/modeling_gptj.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | 5007d8e996cbe6c23dcf2b5792775d8fde104128 | https://github.com/deepfakes/faceswap/issues/252 | added image sort tool to faceswap | I added an image sort tool to faceswap, which is very useful for extracting one face from among various faces
Example original aligned folder:

Sort it by similarity:
`python.exe faceswap\sorttool.py -i %WORKSPACE%\data_src\aligned -by similarity`
result:

easily delete faces which you don't need:

Sort by blur:
`python.exe faceswap\sorttool.py -i %WORKSPACE%\data_src\aligned -by blur`
most sharp 00000.png:

most blurred 00140.png:

| null | https://github.com/deepfakes/faceswap/pull/255 | null | {'base_commit': '5007d8e996cbe6c23dcf2b5792775d8fde104128', 'files': [{'path': 'plugins/PluginLoader.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('PluginLoader', '_import', 20)": {'add': [23]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {"('ConvertImage', 'add_optional_arguments', 24)": {'mod': [43, 44]}}}, {'path': 'scripts/train.py', 'status': 'modified', 'Loc': {"('TrainingProcessor', 'parse_arguments', 25)": {'mod': [75, 76]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"scripts/train.py",
"scripts/convert.py",
"plugins/PluginLoader.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
xtekky | gpt4free | b615a95a417d8a857b1f822bd2d2f993737d532a | https://github.com/xtekky/gpt4free/issues/1347 | bug | Bing stopped working | **Bug description**
Yesterday, Bing still worked, but today brings up only:
```
Using Bing provider
0, message='Attempt to decode JSON with unexpected mimetype: text/html; charset=utf-8', url=URL('https://www.bing.com/turing/conversation/create?bundleVersion=1.1381.8')
127.0.0.1 - - [14/Dec/2023 20:22:32] "POST /backend-api/v2/conversation HTTP/1.1" 200 -
```
**Screenshots**

**Environment**
- python version: 3.12
- location ( are you in a cloudfare flagged country ) : Ukraine
| null | https://github.com/xtekky/gpt4free/pull/1356 | null | {'base_commit': 'b615a95a417d8a857b1f822bd2d2f993737d532a', 'files': [{'path': 'g4f/Provider/Bing.py', 'status': 'modified', 'Loc': {"(None, 'stream_generate', 432)": {'mod': [442, 443]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"g4f/Provider/Bing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
psf | requests | 95161ed313db11296c3bd473336340dbb19bb347 | https://github.com/psf/requests/issues/1995 | Planned
Contributor Friendly | Create an Extra for Better SSL Support | So right now the SSL connections when you use pyOpenSSL, ndg-httpsclient, and pyasn1 are more secure than if you just use the stdlib options. However it's hard to actually remember those three things. It would be cool if requests would add an extra to its setup.py so that people can install requests with betterssl, something like:
``` python
setup(
extras_require={
"betterssl": ["pyOpenSSL", "ndg-httpsclient", "pyasn1"],
},
)
```
Would make it so people can install requests like `pip install requests[betterssl]` and get all of those dependencies without having to manually track those down. It also means people could depend on `requests[betterssl]` instead of just `requests` in their own setup.py's.
Extra name can of course be bikeshed here :)
| null | https://github.com/psf/requests/pull/2195 | null | {'base_commit': '95161ed313db11296c3bd473336340dbb19bb347', 'files': [{'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"setup.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AUTOMATIC1111 | stable-diffusion-webui | 458eda13211ac3498485f1e5154d90808fbcfb60 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12104 | bug | [Bug]: Generating using LoRA fails with Runtime Error with `Lora/Networks: use old method` enabled | ### Is there an existing issue for this?
- [x] I have searched the existing issues and checked the recent builds/commits
### What happened?
I'm on commit 68f336bd994bed5442ad95bad6b6ad5564a5409a, master HEAD at time of posting.
None of my LORAs seem to be working anymore. Normal prompting works fine, but as soon as I try generating after adding a LORA to my prompt I receive the following:
`RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)`
I'm not well versed in python nor the inner workings of stable diffusion, so I can't debug this myself effectively.
I don't think providing the LORA file or prompt is necessary, as I can reproduce this with any combination of checkpoints and LORAs which would previously work fine.
### Steps to reproduce the problem
1. Select any SD checkpoint
2. txt2image tab
3. Any combination of prompt and negative prompt, doesnt seem to matter
4. Add a LORA to the prompt, no need to even add the activation token.
5. Any generation settings (for my tests I'm using Euler a, 20 steps, 512x512, CFG 7, no scripts, no hires. fix, no face restore).
6. Generate
### What should have happened?
I would expect the LORA to perform as it did in earlier versions with the same configuration, at the very least, generate an image. I haven't done a bisect, but I tried a commit from a week ago or so and it worked fine there. Every time I pull I delete venv and repositories folders beforehand.
### Version or Commit where the problem happens
68f336bd994bed5442ad95bad6b6ad5564a5409a
### What Python version are you running on ?
Python 3.10.x
### What platforms do you use to access the UI ?
Windows
### What device are you running WebUI on?
Nvidia GPUs (RTX 20 above)
### Cross attention optimization
xformers
### What browsers do you use to access the UI ?
Google Chrome
### Command Line Arguments
```Shell
--xformers --reinstall-xformers --precision full --no-half --skip-torch-cuda-test --opt-split-attention
```
### List of extensions
ddetailer, sd-webui-supermerger, stable-diffusion-webui-dataset-tag-editor, stable-diffusion-webui-wd14-tagger
### Console logs
```Shell
venv "C:\sd\sdwebui\webui\venv\Scripts\Python.exe"
Python 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)]
Version: v1.5.1
Commit hash: 68f336bd994bed5442ad95bad6b6ad5564a5409a
Installing xformers
Collecting xformers==0.0.20
Using cached xformers-0.0.20-cp310-cp310-win_amd64.whl (97.6 MB)
Installing collected packages: xformers
Successfully installed xformers-0.0.20
[notice] A new release of pip available: 22.3.1 -> 23.2
[notice] To update, run: C:\sd\sdwebui\webui\venv\Scripts\python.exe -m pip install --upgrade pip
Launching Web UI with arguments: --xformers --reinstall-xformers --precision full --no-half --skip-torch-cuda-test --opt-split-attention
Check config files...
Done
Loading weights [cb15a7187a] from C:\sd\sdwebui\webui\models\Stable-diffusion\Deliberate-inpainting.safetensors
Creating model from config: C:\sd\sdwebui\webui\configs\v1-inpainting-inference.yaml
LatentInpaintDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.54 M params.
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 84.3s (launcher: 11.6s, import torch: 31.4s, import gradio: 8.7s, setup paths: 10.6s, other imports: 8.2s, opts onchange: 0.4s, setup codeformer: 0.4s, list SD models: 0.3s, load scripts: 11.1s, create ui: 1.0s, gradio launch: 0.5s).
Applying attention optimization: xformers... done.
Model loaded in 10.0s (load weights from disk: 0.8s, create model: 1.2s, apply weights to model: 6.4s, move model to device: 1.5s).
Loading weights [f36b3ca4d1] from C:\sd\sdwebui\webui\models\Stable-diffusion\edgeOfRealism_edgeOfRealismBakedVAE.safetensors
Creating model from config: C:\sd\sdwebui\webui\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying attention optimization: xformers... done.
Model loaded in 8.3s (create model: 0.6s, apply weights to model: 5.9s, move model to device: 1.7s).
*** Error completing request
*** Arguments: ('task(yot3zok0bchp1w0)', 'pov of a beautiful asian woman, formal dress, perfect eyes, petite body, in the forest, colorful, yellow leaves, autumn, hair bun, black hair, facing the viewer, bokeh, soft lighting, perfect face, eye contact, brown eyes, <lora:iu_eor_new-000017:1>', 'badhandv4 easynegative ng_deepnegative_v1_75t', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 0, '', '', [], <gradio.routes.Request object at 0x000001E14BCBFBB0>, 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, '', 'None', 30, 4, 0, 0, False, 'None', '<br>', 'None', 30, 4, 0, 0, 4, 0.4, True, 32) {}
Traceback (most recent call last):
File "C:\sd\sdwebui\webui\modules\call_queue.py", line 58, in f
res = list(func(*args, **kwargs))
File "C:\sd\sdwebui\webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\sd\sdwebui\webui\modules\txt2img.py", line 62, in txt2img
processed = processing.process_images(p)
File "C:\sd\sdwebui\webui\modules\processing.py", line 677, in process_images
res = process_images_inner(p)
File "C:\sd\sdwebui\webui\modules\processing.py", line 783, in process_images_inner
p.setup_conds()
File "C:\sd\sdwebui\webui\modules\processing.py", line 1191, in setup_conds
super().setup_conds()
File "C:\sd\sdwebui\webui\modules\processing.py", line 364, in setup_conds
self.uc = self.get_conds_with_caching(prompt_parser.get_learned_conditioning, negative_prompts, self.steps * self.step_multiplier, [self.cached_uc], self.extra_network_data)
File "C:\sd\sdwebui\webui\modules\processing.py", line 353, in get_conds_with_caching
cache[1] = function(shared.sd_model, required_prompts, steps)
File "C:\sd\sdwebui\webui\modules\prompt_parser.py", line 163, in get_learned_conditioning
conds = model.get_learned_conditioning(texts)
File "C:\sd\sdwebui\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 669, in get_learned_conditioning
c = self.cond_stage_model(c)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\modules\sd_hijack_clip.py", line 234, in forward
z = self.process_tokens(tokens, multipliers)
File "C:\sd\sdwebui\webui\modules\sd_hijack_clip.py", line 271, in process_tokens
z = self.encode_with_transformers(tokens)
File "C:\sd\sdwebui\webui\modules\sd_hijack_clip.py", line 324, in encode_with_transformers
outputs = self.wrapped.transformer(input_ids=tokens, output_hidden_states=-opts.CLIP_stop_at_last_layers)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 811, in forward
return self.text_model(
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 721, in forward
encoder_outputs = self.encoder(
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 650, in forward
layer_outputs = encoder_layer(
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 389, in forward
hidden_states = self.mlp(hidden_states)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\transformers\models\clip\modeling_clip.py", line 344, in forward
hidden_states = self.fc1(hidden_states)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\extensions-builtin\Lora\networks.py", line 357, in network_Linear_forward
return network_forward(self, input, torch.nn.Linear_forward_before_network)
File "C:\sd\sdwebui\webui\extensions-builtin\Lora\networks.py", line 345, in network_forward
y = module.forward(y, input)
File "C:\sd\sdwebui\webui\extensions-builtin\Lora\network_lora.py", line 84, in forward
return y + self.up_model(self.down_model(x)) * self.multiplier() * self.calc_scale()
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "C:\sd\sdwebui\webui\extensions-builtin\Lora\networks.py", line 357, in network_Linear_forward
return network_forward(self, input, torch.nn.Linear_forward_before_network)
File "C:\sd\sdwebui\webui\extensions-builtin\Lora\networks.py", line 337, in network_forward
y = original_forward(module, input)
File "C:\sd\sdwebui\webui\venv\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (77x3072 and 768x128)
```
### Additional information
None of the extensions listed are used in the context of this issue | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12466 | null | {'base_commit': '458eda13211ac3498485f1e5154d90808fbcfb60', 'files': [{'path': 'extensions-builtin/Lora/networks.py', 'status': 'modified', 'Loc': {"(None, 'network_forward', 338)": {'mod': [360]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"extensions-builtin/Lora/networks.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | 14d03f60ed366df942be09ee4bc394a69958e09c | https://github.com/scikit-learn/scikit-learn/issues/2185 | Bug
Moderate | MinibatchKMeans bad center reallocation causes duplicate centers | For instance have a look at:
http://scikit-learn.org/dev/auto_examples/cluster/plot_dict_face_patches.html
some of the centroids are duplicated, presumably because of a bug in the bad cluster reallocation heuristic.
| null | https://github.com/scikit-learn/scikit-learn/pull/3376 | null | {'base_commit': '14d03f60ed366df942be09ee4bc394a69958e09c', 'files': [{'path': 'sklearn/cluster/k_means_.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28]}, "(None, '_labels_inertia_precompute_dense', 399)": {'add': [411], 'mod': [399, 402, 403, 409]}, "(None, '_labels_inertia', 416)": {'add': [433, 451], 'mod': [418, 420, 443, 444, 449, 458]}, "(None, '_mini_batch_step', 784)": {'add': [862], 'mod': [789, 794, 797, 800, 803, 807, 809, 812, 817, 818, 819, 821, 824, 828, 829, 839, 840, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851, 853, 854, 855, 856]}, "('KMeans', None, 543)": {'mod': [553, 557, 575, 578, 581, 582, 583, 604, 605]}, "('KMeans', 'transform', 718)": {'mod': [719]}, "('MiniBatchKMeans', None, 969)": {'mod': [983, 990, 1010, 1029, 1038]}, "('MiniBatchKMeans', 'fit', 1081)": {'mod': [1162]}, "('MiniBatchKMeans', 'partial_fit', 1242)": {'mod': [1260, 1279]}}}, {'path': 'sklearn/cluster/tests/test_k_means.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [314]}, "(None, 'test_minibatch_reassign', 315)": {'add': [357], 'mod': [320, 323, 332, 337, 338, 339, 340, 345, 349, 355]}}}, {'path': 'sklearn/utils/setup.py', 'status': 'modified', 'Loc': {"(None, 'configuration', 7)": {'mod': [67, 68]}}}, {'path': 'sklearn/utils/tests/test_extmath.py', 'status': 'modified', 'Loc': {"(None, 'test_random_weights', 61)": {'mod': [75, 76]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/utils/setup.py",
"sklearn/cluster/k_means_.py"
],
"doc": [],
"test": [
"sklearn/cluster/tests/test_k_means.py",
"sklearn/utils/tests/test_extmath.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | c8f3d07e86dd41074971b5423fb932c2eda6db1e | https://github.com/scrapy/scrapy/issues/3341 | Overriding the MailSender class | I'd like to use the built-in email notification service for when a scraper exceeds a certain memory limit (`MEMUSAGE_NOTIFY_MAIL` setting), but it looks like it's not possible to specify the MailSender class to use to send the email. I don't want to use SMTP, I'd like to use a third-party mail sender (e.g. sendgrid).
Is there a way around this?
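One possible workaround sketch, assuming nothing about Scrapy internals beyond its `load_object`-style dotted-path imports: an extension can resolve a mailer class from a setting. The `MAIL_SENDER_CLASS` setting name below is made up for illustration — it is not an existing Scrapy setting:

```python
from importlib import import_module


def load_object(path):
    """Resolve a dotted path like 'pkg.module.ClassName' to the object it names."""
    module_path, _, name = path.rpartition(".")
    return getattr(import_module(module_path), name)


def mailer_from_settings(settings):
    """Instantiate-on-demand: return whatever mailer class the settings name.

    'MAIL_SENDER_CLASS' is a hypothetical setting used for illustration;
    the class it points at only needs to expose a compatible send(...) method.
    """
    return load_object(settings.get("MAIL_SENDER_CLASS", "smtplib.SMTP"))
```

A SendGrid-backed class exposing the same `send()` signature could then be swapped in through settings without touching the memory-usage extension itself.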
Thanks | null | https://github.com/scrapy/scrapy/pull/3346 | null | {'base_commit': 'c8f3d07e86dd41074971b5423fb932c2eda6db1e', 'files': [{'path': 'docs/topics/email.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [70, 108], 'mod': [11, 12, 13, 14, 15, 17, 18, 20, 21, 23, 24, 26, 27, 29, 30, 32, 34, 36, 38, 39, 41, 42, 83, 114, 115]}}}, {'path': 'docs/topics/settings.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [182]}}}, {'path': 'scrapy/extensions/memusage.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17], 'mod': [16]}, "('MemoryUsage', '__init__', 24)": {'mod': [36, 37, 38, 39]}, "('MemoryUsage', '_check_limit', 77)": {'mod': [80, 81, 82, 84, 85]}, "('MemoryUsage', '_check_warning', 96)": {'mod': [97, 101, 102, 103, 105, 106]}, "('MemoryUsage', '_send_report', 111)": {'mod': [114, 115, 116, 118]}}}, {'path': 'scrapy/extensions/statsmailer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9], 'mod': [8]}, "('StatsMailer', None, 11)": {'add': [12], 'mod': [11]}, "('StatsMailer', 'from_crawler', 19)": {'mod': [23]}}}, {'path': 'scrapy/mail.py', 'status': 'modified', 'Loc': {"('MailSender', 'send', 58)": {'add': [100], 'mod': [59, 60, 61, 62, 64, 65, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89]}, "('MailSender', '_sendmail', 122)": {'add': [137]}, "('MailSender', None, 39)": {'mod': [39]}, "('MailSender', 'from_settings', 53)": {'mod': [54, 55, 56]}}}, {'path': 'scrapy/settings/default_settings.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [51]}}}, {'path': 'scrapy/utils/test.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28]}}}, {'path': 'tests/test_mail.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5], 'mod': [4, 7, 9, 11, 12, 13, 14, 16, 18, 19, 20, 22, 23, 24, 25, 26, 28, 29, 30, 31, 33, 34, 35, 36, 37, 39, 40, 41, 43]}, "('MailSenderTest', 'test_send_attach', 43)": {'mod': [49, 50, 51, 53, 54, 
55, 56, 58, 59, 60]}, "('MailSenderTest', None, 9)": {'mod': [71, 72, 74, 91]}, "('MailSenderTest', 'test_send_utf8', 74)": {'mod': [77, 78, 79, 81, 82, 83, 85, 86]}, "('MailSenderTest', 'test_send_attach_utf8', 91)": {'mod': [99, 100, 101, 102, 104, 105, 106, 108, 109, 111, 112]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/extensions/statsmailer.py",
"scrapy/settings/default_settings.py",
"scrapy/extensions/memusage.py",
"scrapy/mail.py"
],
"doc": [
"docs/topics/settings.rst",
"docs/topics/email.rst"
],
"test": [
"tests/test_mail.py",
"scrapy/utils/test.py"
],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | dbdd7996960ba46ed044a773290b02f17478c760 | https://github.com/3b1b/manim/issues/1059 | Impossible to open 'CC:/manim/manim_3_feb/media/videos/example_scenes/480p15/partial_movie_files/SquareToCircle/00000.mp4' | 
Help me solve this | null | https://github.com/3b1b/manim/pull/1057 | null | {'base_commit': 'dbdd7996960ba46ed044a773290b02f17478c760', 'files': [{'path': 'manimlib/scene/scene_file_writer.py', 'status': 'modified', 'Loc': {"('SceneFileWriter', 'combine_movie_files', 253)": {'mod': [289]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/scene/scene_file_writer.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
python | cpython | ae00b810d1d3ad7f1f7e226b02ece37c986330e7 | https://github.com/python/cpython/issues/104803 | OS-windows | Allow detecting Dev Drive on Windows | Windows just announced a new [Dev Drive](https://learn.microsoft.com/en-us/windows/dev-drive/) feature, optimised for high I/O scenarios such as build and test. It also works as a very clear signal that the user is a developer and is doing developer-like tasks.
We should add a function to allow querying whether a specific path is on a Dev Drive. The API is relatively low level, and cannot currently be used from Python, but would allow Python apps to detect when the user is operating on a Dev Drive (e.g. installing or compiling something on one), or choose or offer a more performant temporary or cache location than the user directory.
(For a variety of mostly compatibility reasons, there's no way for Windows to redirect `%TEMP%` onto a Dev Drive, but apps that are aware of it can do it for themselves.)
<!-- gh-linked-prs -->
### Linked PRs
* gh-104805
* gh-105054
<!-- /gh-linked-prs -->
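A usage sketch, assuming the function lands as `os.path.isdevdrive` per the linked PRs; the cache-directory selection logic here is illustrative, not CPython code:

```python
import os.path
import tempfile


def pick_cache_dir(candidates):
    """Return the first existing candidate that sits on a Dev Drive, else temp.

    os.path.isdevdrive is assumed to exist only on recent Python builds
    (and may be absent or always False off Windows), so probe for it.
    """
    isdevdrive = getattr(os.path, "isdevdrive", None)
    if isdevdrive is not None:
        for path in candidates:
            try:
                if os.path.isdir(path) and isdevdrive(path):
                    return path
            except OSError:
                continue  # unreadable path: skip rather than fail
    return tempfile.gettempdir()
```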
| null | https://github.com/python/cpython/pull/104805 | null | {'base_commit': 'ae00b810d1d3ad7f1f7e226b02ece37c986330e7', 'files': [{'path': 'Doc/library/os.path.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [306]}}}, {'path': 'Lib/ntpath.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [869]}}}, {'path': 'Lib/test/test_ntpath.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [994]}}}, {'path': 'Modules/clinic/posixmodule.c.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1717, 11381], 'mod': [11925]}}}, {'path': 'Modules/posixmodule.c', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4532, 15799]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"Modules/clinic/posixmodule.c.h",
"Lib/ntpath.py",
"Modules/posixmodule.c"
],
"doc": [
"Doc/library/os.path.rst"
],
"test": [
"Lib/test/test_ntpath.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 2814e0e1972fa38151b6800c881d49f50edf9c6b | https://github.com/scrapy/scrapy/issues/5226 | enhancement
good first issue
docs | Document Reppy Python version support | The optional dependency on reppy for one of the built-in robots.txt parsers is [preventing us from running the extra-dependencies CI job with Python 3.9+](https://github.com/seomoz/reppy/issues/122). https://github.com/seomoz/reppy has not had a commit for ~1.5 years.
So I think we should deprecate the component.
If we don’t, we should document this limitation, and schedule a deprecation for 1 year before Python 3.8 reaches end of life, ~~i.e. in 9 months~~, because once we drop Python 3.8 support we will be forced to remove this component anyway, so giving a deprecation warning 1 year before is probably in the best interest of any user of the component. | null | https://github.com/scrapy/scrapy/pull/5231 | null | {'base_commit': '2814e0e1972fa38151b6800c881d49f50edf9c6b', 'files': [{'path': 'docs/topics/downloader-middleware.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1072]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docs/topics/downloader-middleware.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
psf | requests | 9968a10fcfad7268b552808c4f8946eecafc956a | https://github.com/psf/requests/issues/1650 | Requests doesn't catch requests.packages.urllib3.exceptions.ProxyError | Requests doesn't catch requests.packages.urllib3.exceptions.ProxyError and translate it into a requests-specific exception deriving from RequestException, as it does for other errors originating from urllib3. This means that code trying to catch any exception derived from RequestException, so as to treat it specially, will miss the urllib3 ProxyError.
| null | https://github.com/psf/requests/pull/1651 | null | {'base_commit': '9968a10fcfad7268b552808c4f8946eecafc956a', 'files': [{'path': 'requests/adapters.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [24], 'mod': [26]}, "('HTTPAdapter', 'send', 283)": {'add': [355]}}}, {'path': 'requests/exceptions.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [29]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"requests/adapters.py",
"requests/exceptions.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
AUTOMATIC1111 | stable-diffusion-webui | 7f8ab1ee8f304031b3404e25761dd0f4c7be7df8 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/873 | enhancement | Outpainting script does not save multiple images when using batch sliders | When using the batch-count slider and the batch-size slider, the outpainting script does not save multiple images, but just the first one.
Looking at the console window we can see the actual processing is happening for all the N images (batch-count * batch-size), but at the end of the process only the first one is saved to disk.
| null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/3244 | null | {'base_commit': '7f8ab1ee8f304031b3404e25761dd0f4c7be7df8', 'files': [{'path': 'scripts/outpainting_mk_2.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [262]}, "('Script', 'run', 142)": {'mod': [175, 177, 179, 245, 247, 248, 249, 250, 251, 252, 253, 254, 256, 259, 261]}, "('Script', 'expand', 179)": {'mod': [185, 186, 187, 188, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 201, 202, 203, 204, 206, 207, 209, 210, 211, 212, 213, 214, 216, 217, 219, 220, 221, 222, 235, 241, 242, 243]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scripts/outpainting_mk_2.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 803eb82362278b755127649e9bb5f385639a23ca | https://github.com/AntonOsika/gpt-engineer/issues/613 | good first issue
sweep | Add numpy doc strings | Add numpy style doc strings to all functions apart from the main.py file.
<details>
<summary>Checklist</summary>
- [X] `gpt_engineer/ai.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/chat_to_files.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/collect.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/db.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
- [X] `gpt_engineer/learning.py`
> • For each function in this file, add or replace the existing docstring with a numpy-style docstring. The docstring should include a brief description of the function, a list of parameters with their types and descriptions, and a description of the return value.
</details>
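For reference, a function documented in the numpy style requested above might look like this (the function itself is just an illustration):

```python
def add(a, b):
    """Add two numbers.

    Parameters
    ----------
    a : int or float
        First operand.
    b : int or float
        Second operand.

    Returns
    -------
    int or float
        The sum of ``a`` and ``b``.
    """
    return a + b
```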
| null | https://github.com/AntonOsika/gpt-engineer/pull/615 | null | {'base_commit': '803eb82362278b755127649e9bb5f385639a23ca', 'files': [{'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {"('AI', None, 39)": {'add': [40, 52, 59, 62, 65, 97, 101, 127, 141, 144]}, "('AI', 'next', 68)": {'add': [77]}, "('AI', 'update_token_usage_log', 104)": {'add': [106]}, "(None, 'fallback_model', 156)": {'add': [156]}, "(None, 'create_chat_model', 169)": {'add': [169]}, "(None, 'get_tokenizer', 188)": {'add': [188]}}}, {'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {"(None, 'to_files', 44)": {'add': [44]}, "(None, 'parse_chat', 7)": {'mod': [9, 10]}, "(None, 'overwrite_files', 52)": {'mod': [54]}, "(None, 'get_code_strings', 69)": {'mod': [71]}, "(None, 'format_file_to_input', 84)": {'mod': [86]}}}, {'path': 'gpt_engineer/collect.py', 'status': 'modified', 'Loc': {"(None, 'send_learning', 11)": {'add': [12, 19]}, "(None, 'collect_learnings', 33)": {'add': [33]}, "(None, 'steps_file_hash', 55)": {'add': [55]}}}, {'path': 'gpt_engineer/db.py', 'status': 'modified', 'Loc': {"('DB', None, 9)": {'add': [12, 17, 20, 28, 34]}, "(None, 'archive', 56)": {'add': [56]}, "('DB', '__setitem__', 34)": {'mod': [41]}}}, {'path': 'gpt_engineer/learning.py', 'status': 'modified', 'Loc': {"(None, 'human_review_input', 54)": {'add': [54]}, "(None, 'check_consent', 98)": {'add': [98]}, "(None, 'collect_consent', 115)": {'add': [115]}, "(None, 'ask_if_can_store', 130)": {'add': [130]}, "(None, 'logs_to_string', 149)": {'add': [149]}, "(None, 'extract_learning', 157)": {'add': [159]}, "(None, 'get_session', 178)": {'mod': [179]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"gpt_engineer/learning.py",
"gpt_engineer/db.py",
"gpt_engineer/chat_to_files.py",
"gpt_engineer/ai.py",
"gpt_engineer/collect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
abi | screenshot-to-code | b9522fede2835b3c3b4728e1d005541087ec2208 | https://github.com/abi/screenshot-to-code/issues/29 | Allow user to open the preview website in a new window | null | null | https://github.com/abi/screenshot-to-code/pull/99 | null | {'base_commit': 'b9522fede2835b3c3b4728e1d005541087ec2208', 'files': [{'path': 'frontend/src/App.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [99, 316], 'mod': [322, 323, 324, 325, 326, 327, 328]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"frontend/src/App.tsx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
huggingface | transformers | eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d | https://github.com/huggingface/transformers/issues/8171 | New model | Need suggestion on contributing TFDPR | # 🌟 New model addition
## Model description
Hi, I would love to try contributing TFDPR. This is my first time contributing, so I need some suggestions.
I have followed @sshleifer's [great PR on the TFBart model](https://github.com/huggingface/transformers/commit/829842159efeb1f920cbbb1daf5ad67e0114d0b9) across 4 files: `__init__.py`, `convert_pytorch_checkpoint_to_tf2.py`, `utils/dummy_tf_objects.py` and the (newly created) `modeling_tf_dpr.py`.
Now the TF model works properly and loads PyTorch's weights successfully, producing the same output as its PyTorch counterpart **except** for small random noise (~1e-5), which I suspect comes from a dtype difference, but I could not find the cause.
I guess I need to add documentation in docs/source/model_doc/dpr.rst, and that's all?
**My question is do I need to change / fix any other files ? and/or do I need to do some other thing before making PR ?**
<!-- Important information -->
To resolve TF vs. Pytorch naming issues, there's one change regarding `TFBertModel` vs. `TFBertMainLayer` as [discussed here](https://discuss.huggingface.co/t/solved-issue-on-translating-dpr-to-tfdpr-on-loading-pytorch-weights-to-tf-model/1764) .
Thanks to @sshleifer for his help to solve the issue.
## Open source status
* [X] the model implementation is available: (give details)
You can see all the modified codes with test run at :
https://colab.research.google.com/drive/1lU4fx7zkr-Y3CXa3wmHIY8yJhKdiN3DI?usp=sharing
(to easily navigate the changes, please “find on page” for e.g. `TFDPRContextEncoder` )
* [X] the model weights are available: (give details)
At the moment, I use existing Pytorch weights, but will upload TF weights too.
* [X] who are the authors: (mention them, if possible by @gh-username)
@ratthachat | null | https://github.com/huggingface/transformers/pull/8203 | null | {'base_commit': 'eb3bd73ce35bfef56eeb722d697f2d39a06a8f8d', 'files': [{'path': 'docs/source/model_doc/dpr.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [101]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [408, 715]}}}, {'path': 'src/transformers/convert_pytorch_checkpoint_to_tf2.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27, 45, 61, 100, 149]}}}, {'path': 'src/transformers/modeling_tf_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45, 89, 194]}}}, {'path': 'src/transformers/utils/dummy_pt_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [737]}}}, {'path': 'src/transformers/utils/dummy_tf_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [497]}}}, {'path': 'tests/test_modeling_dpr.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [26]}, "('DPRModelTest', 'test_model_from_pretrained', 214)": {'add': [229]}}}, {'path': 'utils/check_repo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [35, 59, 89]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/utils/dummy_pt_objects.py",
"src/transformers/utils/dummy_tf_objects.py",
"src/transformers/__init__.py",
"src/transformers/modeling_tf_auto.py",
"utils/check_repo.py",
"src/transformers/convert_pytorch_checkpoint_to_tf2.py"
],
"doc": [
"docs/source/model_doc/dpr.rst"
],
"test": [
"tests/test_modeling_dpr.py"
],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 9bee9ff5db6e68fb31065898d7e924d07c1eb9c1 | https://github.com/huggingface/transformers/issues/34390 | bug | [mask2former] torch.export error for Mask2Former | ### System Info
- `transformers` version: 4.46.0.dev0
- Platform: Linux-6.8.0-47-generic-x86_64-with-glibc2.35
- Python version: 3.11.9
- Huggingface_hub version: 0.25.2
- Safetensors version: 0.4.5
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.4.1+cu121 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using distributed or parallel set-up in script?: <fill in>
- Using GPU in script?: <fill in>
- GPU type: NVIDIA GeForce RTX 4090
### Who can help?
@amyeroberts, @qubvel, @ylacombe
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
import torch
from transformers import Mask2FormerForUniversalSegmentation
model = Mask2FormerForUniversalSegmentation.from_pretrained(
"facebook/mask2former-swin-base-coco-panoptic", torchscript=True
)
scripted_model = torch.export.export(model, args=(torch.randn(1, 3, 800, 1280),))
```
which causes
```
UserError: Could not extract specialized integer from data-dependent expression u0 (unhinted: u0). (Size-like symbols: none)
Potential framework code culprit (scroll up for full backtrace):
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/_dynamo/utils.py", line 2132, in run_node
return node.target(*args, **kwargs)
For more information, run with TORCH_LOGS="dynamic"
For extended logs when we create symbols, also add TORCHDYNAMO_EXTENDED_DEBUG_CREATE_SYMBOL="u0"
If you suspect the guard was triggered from C++, add TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more debugging help, see https://docs.google.com/document/d/1HSuTTVvYH1pTew89Rtpeu84Ht3nQEFTYhAX3Ypa_xJs/edit?usp=sharing
User Stack (most recent call last):
(snipped, see stack below for prefix)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2499, in forward
outputs = self.model(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2270, in forward
pixel_level_module_output = self.pixel_level_module(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1395, in forward
decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1319, in forward
encoder_outputs = self.encoder(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1165, in forward
reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1106, in get_reference_points
torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),
For C++ stack trace, run with TORCHDYNAMO_EXTENDED_DEBUG_CPP=1
For more information about this error, see: https://pytorch.org/docs/main/generated/exportdb/index.html#constrain-as-size-example
from user code:
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2499, in forward
outputs = self.model(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 2270, in forward
pixel_level_module_output = self.pixel_level_module(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1395, in forward
decoder_output = self.decoder(backbone_features, output_hidden_states=output_hidden_states)
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1319, in forward
encoder_outputs = self.encoder(
File "/home/philkuz/.pyenv/versions/3.11.9/envs/gml311/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1747, in _call_impl
return forward_call(*args, **kwargs)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1165, in forward
reference_points = self.get_reference_points(spatial_shapes, valid_ratios, device=inputs_embeds.device)
File "/home/philkuz/dev/transformers/src/transformers/models/mask2former/modeling_mask2former.py", line 1106, in get_reference_points
torch.linspace(0.5, height - 0.5, height, dtype=valid_ratios.dtype, device=device),
```
### Expected behavior
torch.export works for this model. | null | https://github.com/huggingface/transformers/pull/34393 | null | {'base_commit': '9bee9ff5db6e68fb31065898d7e924d07c1eb9c1', 'files': [{'path': 'src/transformers/models/mask2former/modeling_mask2former.py', 'status': 'modified', 'Loc': {"('Mask2FormerPixelDecoder', 'forward', 1280)": {'add': [1333], 'mod': [1305, 1307, 1323, 1337, 1339, 1341, 1345]}, "('Mask2FormerPixelDecoderEncoderMultiscaleDeformableAttention', 'forward', 921)": {'mod': [929, 939, 960, 973]}, "('Mask2FormerPixelDecoderEncoderLayer', 'forward', 998)": {'mod': [1004, 1018, 1019, 1036]}, "('Mask2FormerPixelDecoderEncoderOnly', None, 1069)": {'mod': [1089]}, "('Mask2FormerPixelDecoderEncoderOnly', 'get_reference_points', 1089)": {'mod': [1094, 1095, 1104]}, "('Mask2FormerPixelDecoderEncoderOnly', 'forward', 1120)": {'mod': [1125, 1143, 1144, 1165, 1179]}, "('Mask2FormerMaskedAttentionDecoder', 'forward', 1792)": {'mod': [1879]}}}, {'path': 'tests/models/mask2former/test_modeling_mask2former.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22]}, "('Mask2FormerModelIntegrationTest', 'test_with_segmentation_maps_and_loss', 466)": {'add': [483]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/models/mask2former/modeling_mask2former.py"
],
"doc": [],
"test": [
"tests/models/mask2former/test_modeling_mask2former.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 19d0942c74731d797a3590b1d8d46ece5a6d751f | https://github.com/scrapy/scrapy/issues/3077 | bug
upstream issue | scrapy selector fails when large lines are present in response | Originally encountered when scraping [Amazon restaurant](https://www.amazon.com/restaurants/zzzuszimbos0015gammaloc1name-new-york/d/B01HH7CS44?ref_=amzrst_pnr_cp_b_B01HH7CS44_438).
This page contains multiple script tags with single lines longer than 64,000 characters.
The selectors (xpath and css) do not search beyond these lines.
Due to this, the following xpath `'//h1[contains(@class, "hw-dp-restaurant-name")]/text()'` to extract the name of the restaurant returns an empty result even though a matching tag is present.
PFA the response text at [original_response.html.txt.gz](https://github.com/scrapy/scrapy/files/1631425/original_response.html.txt.gz)
| null | https://github.com/scrapy/scrapy/pull/261 | null | {'base_commit': '19d0942c74731d797a3590b1d8d46ece5a6d751f', 'files': [{'path': 'docs/contributing.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [76]}}}, {'path': 'scrapy/tests/test_utils_url.py', 'status': 'modified', 'Loc': {"('UrlUtilsTest', None, 8)": {'add': [50]}, '(None, None, None)': {'mod': [3, 4]}, "('MySpider', 'test_url_is_from_spider_with_allowed_domains_class_attributes', 52)": {'mod': [54]}}}, {'path': 'scrapy/utils/url.py', 'status': 'modified', 'Loc': {"(None, 'url_is_from_spider', 25)": {'mod': [27, 28]}, "(None, 'canonicalize_url', 33)": {'mod': [33]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/utils/url.py"
],
"doc": [
"docs/contributing.rst"
],
"test": [
"scrapy/tests/test_utils_url.py"
],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | 953757a3e37ffb80570a20a8eca52dae35fc27bb | https://github.com/pandas-dev/pandas/issues/22471 | Testing
Clean
good first issue | TST/CLN: remove TestData from frame-tests; replace with fixtures | Following review in #22236:
> ok, pls open a new issue that refs this, to remove use of `TestData` in favor of fixtures
Started the process in that PR by creating a `conftest.py` that translates all the current attributes of `TestData` to fixtures, with the following "translation guide":
* `frame` -> `float_frame`
* `frame2` -> `float_frame2`
* `intframe` -> `int_frame`
* `tsframe` -> `datetime_frame`
* `mixed_frame` -> `float_string_frame`
* `mixed_float` -> `mixed_float_frame`
* `mixed_float2` -> `mixed_float_frame2`
* `mixed_int` -> `mixed_int_frame`
* `all_mixed` -> `mixed_type_frame`
* `tzframe` -> `timezone_frame`
* `empty` -> `empty_frame`
* `ts1` -> `datetime_series`
* `ts2` -> `datetime_series_short`
* `simple` -> `simple_frame`
Need to incrementally replace their usages in `pandas/tests/frame/` (example below).
- [x] Create `conftest.py` and translate `TestData`-attributes into fixtures (#22236)
- [x] `test_alter_axes.py` (#22236)
- [x] `test_analytics.py` (#22733)
- [x] `test_api.py` (#22738)
- [x] `test_apply.py` (#22735)
- [x] `test_arithmetic.py` (#22736)
- [x] `test_asof.py` (#25628)
- [x] `test_axis_select_reindex.py` (#25627)
- [x] `test_block_internals.py` (#22926)
- [x] `test_combine_concat.py` (#25634)
- [ ] `test_constructors.py` (#25635)
- [ ] `test_convert_to.py`
- [ ] `test_dtypes.py` (#25636)
- [x] `test_duplicates.py`
- [x] `test_indexing.py` (#25633)
- [x] `test_join.py` (#25639)
- [x] `test_missing.py` (#25640)
- [x] `test_mutate_columns.py` (#25642)
- [ ] `test_nonunique_indexes.py`
- [x] `test_operators.py` (#25641)
- [ ] `test_period.py`
- [ ] `test_quantile.py`
- [ ] `test_query_eval.py`
- [ ] `test_rank.py`
- [ ] `test_replace.py`
- [ ] `test_repr_info.py`
- [ ] `test_reshape.py`
- [ ] `test_sort_values_level_as_str.py`
- [ ] `test_sorting.py`
- [ ] `test_subclass.py`
- [ ] `test_timeseries.py`
- [ ] `test_timezones.py`
- [ ] `test_to_csv.py`
- [ ] `test_validate.py`
Things for follow-ups:
- Remove other class-based test-methods
- Turn tests from class- to function-based
An example from #22236 - before:
```
def test_set_columns(self):
cols = Index(np.arange(len(self.mixed_frame.columns)))
self.mixed_frame.columns = cols
with tm.assert_raises_regex(ValueError, 'Length mismatch'):
self.mixed_frame.columns = cols[::2]
```
After:
```
def test_set_columns(self, float_string_frame):
cols = Index(np.arange(len(float_string_frame.columns)))
float_string_frame.columns = cols
with tm.assert_raises_regex(ValueError, 'Length mismatch'):
float_string_frame.columns = cols[::2]
```
Basically, it comes down to replacing all the occurrences of `self.<name>` with `translation_guide[<name>]` (and specifying `<name>` as a parameter to the function).
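The mechanical part of this rename can be sketched with a small helper (fixture names taken from the translation guide above; this is only a rough aid — each test still needs the fixture added to its signature by hand, and `fixturize` is a hypothetical name, not part of pandas):

```python
import re

# Subset of the issue's translation guide; extend with the remaining entries.
translation_guide = {
    "frame": "float_frame",
    "tsframe": "datetime_frame",
    "mixed_frame": "float_string_frame",
}

def fixturize(source: str) -> str:
    """Replace each ``self.<name>`` with its fixture name from the guide."""
    def repl(match):
        attr = match.group(1)
        # Leave unknown attributes untouched so unrelated code is not mangled.
        return translation_guide.get(attr, match.group(0))
    return re.sub(r"self\.(\w+)", repl, source)

print(fixturize("cols = Index(np.arange(len(self.mixed_frame.columns)))"))
# → cols = Index(np.arange(len(float_string_frame.columns)))
```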
PS. Note that some fixtures added by #22236 have now been removed by #24885. Please check #24885 which code was removed, in case you should need it for the fixturisation. Alternatively, you can ping me, @jbrockmendel or @jreback. | null | https://github.com/pandas-dev/pandas/pull/29226 | null | {'base_commit': '953757a3e37ffb80570a20a8eca52dae35fc27bb', 'files': [{'path': 'pandas/tests/frame/common.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 3, 5, 6, 8, 9, 11, 12, 13, 15, 17, 18, 21, 22, 23, 24, 26, 27, 28, 30, 31, 32, 33, 35, 36, 37, 39, 40, 41, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 102, 103, 104, 106, 107, 108, 110, 111, 112, 114, 115, 116, 118, 121, 122]}}}, {'path': 'pandas/tests/frame/test_indexing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [28]}, "('TestDataFrameIndexing', None, 39)": {'mod': [39]}, "('TestDataFrameIndexing', 'test_setitem_fancy_mixed_2d', 1166)": {'mod': [1170, 1171]}, "('TestDataFrameIndexingDatetimeWithTZ', None, 3405)": {'mod': [3405]}, "('TestDataFrameIndexingUInt64', None, 3464)": {'mod': [3464]}}}, {'path': 'pandas/tests/frame/test_query_eval.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12]}, "('TestDataFrameQueryNumExprPython', 'setup_class', 703)": {'mod': [707]}, "('TestDataFrameQueryPythonPandas', 'setup_class', 807)": {'mod': [811]}, "('TestDataFrameQueryPythonPython', 'setup_class', 827)": {'mod': [830]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/tests/frame/common.py"
],
"doc": [],
"test": [
"pandas/tests/frame/test_indexing.py",
"pandas/tests/frame/test_query_eval.py"
],
"config": [],
"asset": []
} | null |
Significant-Gravitas | AutoGPT | 98efd264560983ed1d383222e3d5d22ed87169be | https://github.com/Significant-Gravitas/AutoGPT/issues/75 | API access | API Rate Limit Reached with new key | I just created a new key and it's failing to run:
```
Continue (y/n): y
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
Error: API Rate Limit Reached. Waiting 10 seconds...
```

| null | https://github.com/Significant-Gravitas/AutoGPT/pull/1304 | null | {'base_commit': '98efd264560983ed1d383222e3d5d22ed87169be', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [147], 'mod': [108]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
oobabooga | text-generation-webui | 6a03ad082492268d60fa23ba5f3dcebd1630593e | https://github.com/oobabooga/text-generation-webui/issues/317 | enhancement | Support for ChatGLM | **Description**
[ChatGLM-6B](https://github.com/THUDM/ChatGLM-6B)
A Chinese chat AI based on GLM was released by THU.
| null | https://github.com/oobabooga/text-generation-webui/pull/1256 | null | {'base_commit': '6a03ad082492268d60fa23ba5f3dcebd1630593e', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [221]}}}, {'path': 'download-model.py', 'status': 'modified', 'Loc': {"(None, 'get_download_links_from_huggingface', 82)": {'mod': [111]}}}, {'path': 'models/config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47]}}}, {'path': 'modules/chat.py', 'status': 'modified', 'Loc': {"(None, 'generate_chat_prompt', 21)": {'mod': [52, 63]}}}, {'path': 'modules/models.py', 'status': 'modified', 'Loc': {"(None, 'load_model', 41)": {'add': [46, 122], 'mod': [50, 82, 159, 168, 188]}, '(None, None, None)': {'mod': [13, 14]}}}, {'path': 'modules/shared.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [115, 164]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"modules/shared.py",
"modules/chat.py",
"download-model.py",
"modules/models.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
"models/config.yaml"
],
"asset": []
} | 1 |
scikit-learn | scikit-learn | 130601e076ec5ca8298b95c3d02122ac5d8cf8eb | https://github.com/scikit-learn/scikit-learn/issues/2372 | Bug
Moderate | StratifiedKFold should do its best to preserve the dataset dependency structure | As highlighted in this [notebook](http://nbviewer.ipython.org/urls/raw.github.com/ogrisel/notebooks/master/Non%2520IID%2520cross-validation.ipynb) the current implementation of `StratifiedKFold` (which is used by default by `cross_val_score` and `GridSearchCV` for classification problems) breaks the dependency structure of the dataset by computing the folds based on the sorted labels.
Instead, one should probably do an implementation that performs an individual dependency-preserving KFold for each possible label value and aggregates the folds to get the final `StratifiedKFold` folds.
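One way to build label-wise folds without a global sort — purely a sketch of the idea in this issue, not scikit-learn's implementation — is to deal each label's indices round-robin into the folds, so the original ordering within each label is preserved:

```python
from collections import defaultdict

def stratified_kfold_indices(labels, k=3):
    # Group sample indices by label, then deal each label's indices
    # round-robin into k folds. No global sort, so the original ordering
    # (and any dependency structure) within each label survives.
    by_label = defaultdict(list)
    for idx, label in enumerate(labels):
        by_label[label].append(idx)
    folds = [[] for _ in range(k)]
    for indices in by_label.values():
        for pos, idx in enumerate(indices):
            folds[pos % k].append(idx)
    return [sorted(fold) for fold in folds]

print(stratified_kfold_indices([0, 0, 0, 1, 1, 1], k=3))
# → [[0, 3], [1, 4], [2, 5]]
```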
This might incur a refactoring to get rid of the `_BaseKFold` base class. It might also make it easier to implement a `shuffle=True` option for `StratifiedKFold`.
| null | https://github.com/scikit-learn/scikit-learn/pull/2463 | null | {'base_commit': '130601e076ec5ca8298b95c3d02122ac5d8cf8eb', 'files': [{'path': 'doc/modules/cross_validation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [108, 109, 115, 122, 123, 124, 125, 200, 201, 205, 206, 209, 210]}}}, {'path': 'doc/tutorial/statistical_inference/model_selection.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [146, 148, 149, 150, 151, 166, 167]}}}, {'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [46, 2290], 'mod': [784]}}}, {'path': 'sklearn/cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, "('StratifiedKFold', '__init__', 375)": {'add': [385], 'mod': [378, 379]}, "('StratifiedKFold', None, 335)": {'mod': [388, 389, 390, 391, 392]}}}, {'path': 'sklearn/feature_selection/tests/test_rfe.py', 'status': 'modified', 'Loc': {"(None, 'test_rfecv', 64)": {'add': [78], 'mod': [72, 80, 85, 86, 87, 90, 96, 97, 101, 106, 107]}}}, {'path': 'sklearn/tests/test_cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 24, 93], 'mod': [152]}, "(None, 'test_kfold_valueerrors', 95)": {'add': [112], 'mod': [103, 104]}, "(None, 'test_kfold_indices', 127)": {'mod': [130, 131, 132, 133, 134, 135, 137, 138]}, "(None, 'test_shuffle_kfold', 153)": {'mod': [156, 157, 160, 161, 162, 163, 164, 165, 166, 167, 168, 169, 170, 171, 172, 174, 175]}, "(None, 'test_cross_val_score_with_score_func_classification', 376)": {'mod': [382, 388, 394, 399]}, "(None, 'test_permutation_score', 429)": {'mod': [453, 473, 480]}}}, {'path': 'sklearn/tests/test_naive_bayes.py', 'status': 'modified', 'Loc': {"(None, 'test_check_accuracy_on_digits', 330)": {'mod': [332, 333, 341, 344, 348, 351, 355]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/cross_validation.py"
],
"doc": [
"doc/modules/cross_validation.rst",
"doc/tutorial/statistical_inference/model_selection.rst",
"doc/whats_new.rst"
],
"test": [
"sklearn/tests/test_naive_bayes.py",
"sklearn/feature_selection/tests/test_rfe.py",
"sklearn/tests/test_cross_validation.py"
],
"config": [],
"asset": []
} | 1 |
Significant-Gravitas | AutoGPT | 6ff8478118935b72c35f3ec1b31e74f2a1aa2e90 | https://github.com/Significant-Gravitas/AutoGPT/issues/528 | enhancement
good first issue
potential plugin
Stale | Auto-GPT System Awareness | ### System Awareness
- [X] I have searched the existing issues
### Summary 💡
Before going out to look at the internet,
it would be helpful if, upon activation, the AI took inventory of the system it is on and shared the available tools and capabilities,
and, if they were insufficient, began researching and developing GAP tools to use during the session, with the expressed request to push the GAP tools back to the community via PR.
### Examples 🌈
AI System initializing
- MacOS
- Python3
- Pip
- Shell Commands available...
- Desktop App skills available...
What are your goals?
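The inventory sketched in the example could be gathered with the standard library alone — a rough sketch, where the probed tool list is illustrative rather than a proposed API:

```python
import platform
import shutil

def system_inventory(tools=("python3", "pip", "git", "sh")):
    # Minimal host inventory the agent could report on startup.
    return {
        "os": platform.system(),
        "python": platform.python_version(),
        "available_tools": [t for t in tools if shutil.which(t)],
    }

print(system_inventory())
```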
### Motivation 🔦
usability
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"autogpt/plugins.py",
"scripts/install_plugin_deps.py"
],
"doc": [
".github/PULL_REQUEST_TEMPLATE.md"
],
"test": [
"tests/integration/test_web_selenium.py",
"tests/integration/test_plugins.py"
],
"config": [
".github/workflows/ci.yml",
".pre-commit-config.yaml"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 707ab7b3f84fb5664ff63da0b52e7b0d2e4df545 | https://github.com/All-Hands-AI/OpenHands/issues/908 | bug | Agent stuck in the "starting task" step--Unsupported Protocol |
#### Describe the bug
I asked the agent to build a calculator, but it didn't give me any response; it just got stuck at the starting step.
#### Setup and configuration
**Current version**:
```bash
commit e9121b78fed0b5ef36718ca0bf59588c0b094b86 (HEAD -> main)
Author: Xingyao Wang <xingyao6@illinois.edu>
Date: Sun Apr 7 16:07:59 2024 +0800
```
**My config.toml and environment vars** (be sure to redact API keys):
```toml
LLM_MODEL="gpt-3.5-turbo-1106"
LLM_API_KEY="already set, and have test in python script, which works"
LLM_EMBEDDING_MODEL="openai"
WORKSPACE_DIR="./workspace"
```
**My model and agent** (you can see these settings in the UI):
* Model: gpt-3.5-turbo-1106
* Agent: PlannerAgent
**Commands I ran to install and run OpenDevin**:
```
make build
make run
```
**Steps to Reproduce**:
run the commands, input: build a calculator with python
**Logs, error messages, and screenshots**:

backend:
```
INFO: 127.0.0.1:34564 - "GET /litellm-agents HTTP/1.1" 200 OK
INFO: 127.0.0.1:34572 - "GET /messages/total HTTP/1.1" 200 OK
INFO: 127.0.0.1:34584 - "DELETE /messages HTTP/1.1" 200 OK
==============
STEP 0
PLAN:
build a calculator with python
INFO:
HINT:
You're not currently working on any tasks. Your next action MUST be to mark a task as in_progress.
```
frontend:
```
22:35:39 - opendevin:INFO: sandbox.py:117 - Using workspace directory: /mnt/d/OpenDevin/workspace
22:35:39 - opendevin:INFO: sandbox.py:257 - Container stopped
22:35:39 - opendevin:INFO: sandbox.py:277 - Container started
22:37:54 - opendevin:INFO: sandbox.py:117 - Using workspace directory: /mnt/d/OpenDevin/workspace
```
llm prompt_001:
```
[{'content': '\n# Task\nYou\'re a diligent software engineer AI. You can\'t see, draw, or interact with a\nbrowser, but you can read and write files, and you can run commands, and you can think.\n\nYou\'ve been given the following task:\n\nbuild a calculator with python\n\n## Plan\nAs you complete this task, you\'re building a plan and keeping\ntrack of your progress. Here\'s a JSON representation of your plan:\n\n{\n "id": "0",\n "goal": "build a calculator with python",\n "state": "open",\n "subtasks": []\n}\n\n\nYou\'re not currently working on any tasks. Your next action MUST be to mark a task as in_progress.\n\nYou\'re responsible for managing this plan and the status of tasks in\nit, by using the `add_task` and `modify_task` actions described below.\n\nIf the History below contradicts the state of any of these tasks, you\nMUST modify the task using the `modify_task` action described below.\n\nBe sure NOT to duplicate any tasks. Do NOT use the `add_task` action for\na task that\'s already represented. Every task must be represented only once.\n\nTasks that are sequential MUST be siblings. They must be added in order\nto their parent task.\n\nIf you mark a task as \'completed\', \'verified\', or \'abandoned\',\nall non-abandoned subtasks will be marked the same way.\nSo before closing a task this way, you MUST not only be sure that it has\nbeen completed successfully--you must ALSO be sure that all its subtasks\nare ready to be marked the same way.\n\nIf, and only if, ALL tasks have already been marked verified,\nyou MUST respond with the `finish` action.\n\n## History\nHere is a recent history of actions you\'ve taken in service of this plan,\nas well as observations you\'ve made. This only includes the MOST RECENT\nten actions--more happened before that.\n\n[]\n\n\nYour most recent action is at the bottom of that history.\n\n## Action\nWhat is your next thought or action? 
Your response must be in JSON format.\n\nIt must be an object, and it must contain two fields:\n* `action`, which is one of the actions below\n* `args`, which is a map of key-value pairs, specifying the arguments for that action\n\n* `read` - reads the content of a file. Arguments:\n * `path` - the path of the file to read\n* `write` - writes the content to a file. Arguments:\n * `path` - the path of the file to write\n * `content` - the content to write to the file\n* `run` - runs a command on the command line in a Linux shell. Arguments:\n * `command` - the command to run\n * `background` - if true, run the command in the background, so that other commands can be run concurrently. Useful for e.g. starting a server. You won\'t be able to see the logs. You don\'t need to end the command with `&`, just set this to true.\n* `kill` - kills a background command\n * `id` - the ID of the background command to kill\n* `browse` - opens a web page. Arguments:\n * `url` - the URL to open\n* `think` - make a plan, set a goal, or record your thoughts. Arguments:\n * `thought` - the thought to record\n* `add_task` - add a task to your plan. Arguments:\n * `parent` - the ID of the parent task\n * `goal` - the goal of the task\n * `subtasks` - a list of subtasks, each of which is a map with a `goal` key.\n* `modify_task` - close a task. Arguments:\n * `id` - the ID of the task to close\n * `state` - set to \'in_progress\' to start the task, \'completed\' to finish it, \'verified\' to assert that it was successful, \'abandoned\' to give up on it permanently, or `open` to stop working on it for now.\n* `finish` - if ALL of your tasks and subtasks have been verified or abandoned, and you\'re absolutely certain that you\'ve completed your task and have tested your work, use the finish action to stop working.\n\nYou MUST take time to think in between read, write, run, browse, and recall actions.\nYou should never act twice in a row without thinking. 
But if your last several\nactions are all `think` actions, you should consider taking a different action.\n\nWhat is your next thought or action? Again, you must reply with JSON, and only with JSON.\n\nYou\'re not currently working on any tasks. Your next action MUST be to mark a task as in_progress.\n', 'role': 'user'}]
```
llm response is empty
#### Additional Context
I also tried to use gpt-4 and got the same result.
| null | https://github.com/All-Hands-AI/OpenHands/pull/960 | null | {'base_commit': '707ab7b3f84fb5664ff63da0b52e7b0d2e4df545', 'files': [{'path': 'opendevin/config.py', 'status': 'modified', 'Loc': {"(None, 'get_all', 78)": {'mod': [78, 82]}}}, {'path': 'opendevin/server/agent/manager.py', 'status': 'modified', 'Loc': {"('AgentManager', 'create_controller', 93)": {'mod': [107, 108, 109, 110, 111]}}}, {'path': 'opendevin/server/listen.py', 'status': 'modified', 'Loc': {"(None, 'read_default_model', 114)": {'mod': [115]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"opendevin/config.py",
"opendevin/server/agent/manager.py",
"opendevin/server/listen.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ageitgey | face_recognition | 59f4d299b6ae3232a1d8fe5d5d9652bffa17a728 | https://github.com/ageitgey/face_recognition/issues/809 | facerec_from_webcam_multiprocessing.py run Global is not defined | * face_recognition version: 1.23
* Python version: 3.6.6
* Operating System: windows 10
### Description

### What I Did
```
facerec_from_webcam_multiprocessing.py run Global is not defined. pls fix it, thanks
```
| null | https://github.com/ageitgey/face_recognition/pull/905 | null | {'base_commit': '59f4d299b6ae3232a1d8fe5d5d9652bffa17a728', 'files': [{'path': 'examples/facerec_from_webcam_multiprocessing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 113], 'mod': [3, 125, 130, 131, 154, 189]}, "(None, 'next_id', 17)": {'mod': [17]}, "(None, 'prev_id', 25)": {'mod': [25]}, "(None, 'capture', 33)": {'mod': [33, 43, 47]}, "(None, 'process', 56)": {'mod': [56, 62, 72, 109]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/facerec_from_webcam_multiprocessing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
pandas-dev | pandas | dc86509b44b3fb0cd9a1a6d6ed564b082dc50848 | https://github.com/pandas-dev/pandas/issues/26139 | Docs
IO HDF5 | Doc for HDFStore compression unclear on what the default value of None does | The doc for the `HDFStore` class mentions:
```
complevel : int, 0-9, default None
Specifies a compression level for data.
A value of 0 disables compression.
```
That doesn't actually answer the question of what compression level is used when the default (None) is used, though. Is None translated further down to 0? It turns out yes, but you have to dig into the code to figure that out, and it could just as well have been translated to any other value.
Two options:
1. Actually change the default in the `complevel` argument to be "0". (It's an immutable object, so it's fine as a default value for a function argument.)
2. Just adjust the doc in some way.
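Whichever option is chosen, the behaviour being documented boils down to this translation (a sketch of the semantics only, with a hypothetical helper name — not the actual pytables code path):

```python
def normalize_complevel(complevel=None):
    # None and 0 both mean "no compression"; the docs should say so explicitly.
    return 0 if complevel is None else complevel

assert normalize_complevel() == 0
assert normalize_complevel(0) == 0
assert normalize_complevel(9) == 9
```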
When the right solution is decided, I can do a pull request with it. Thanks! | null | https://github.com/pandas-dev/pandas/pull/26158 | null | {'base_commit': 'dc86509b44b3fb0cd9a1a6d6ed564b082dc50848', 'files': [{'path': 'pandas/io/pytables.py', 'status': 'modified', 'Loc': {"('HDFStore', None, 401)": {'mod': [425]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"pandas/io/pytables.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
nvbn | thefuck | 0949d2e77022ad69cc07d4b25a858a7e023503ac | https://github.com/nvbn/thefuck/issues/1207 |  | git push upstream branch does not exist, wrong command recommended first |
Recently I noticed a change in a `thefuck` behavior that I use very regularly, which I wanted to call out as what I think is an unwanted change. This was introduced very recently, I believe with the 3.31 release. When using `git push` on a git repository where the branch does not exist in the upstream repository, `git` responds with a specific command one should run to create the upstream branch. Prior to version 3.31, `thefuck` seemed to recognize this and made the first suggested Corrected Command the one `git` recommended. As of version 3.31, `thefuck` instead puts a generic `git push --no-verify` command first, and the one `git` recommended is instead the second result.
In this case where `git` recommends a specific command, `git push --no-verify` doesn't actually help or do what the user wants; you need the `git push --set-upstream origin branch-name` command which `thefuck` now arrives at second. Because of the inconvenience for this particular case, combined with the fact that the first option recommended by `thefuck` isn't functionally valid, the prior behavior is more correct for this particular case.
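The case being described — git itself printing the exact fix — can be detected mechanically. A sketch (the regex and names are illustrative, not thefuck's actual rule code):

```python
import re

GIT_HINT = re.compile(r"git push --set-upstream (\S+) (\S+)")

def suggested_push(git_stderr):
    """Return the exact push command git recommends, if any."""
    match = GIT_HINT.search(git_stderr)
    return match.group(0) if match else None

stderr = (
    "fatal: The current branch branch-name has no upstream branch.\n"
    "To push the current branch and set the remote as upstream, use\n"
    "\n"
    "    git push --set-upstream origin branch-name\n"
)
print(suggested_push(stderr))
# → git push --set-upstream origin branch-name
```

When such a match exists, ranking it above the generic `git push --no-verify` would restore the pre-3.31 behaviour.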
Below is all the debug information requested in the issue template:
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.31 using Python 3.9.5 and ZSH 5.8
Your system (Debian 7, ArchLinux, Windows, etc.):
Arch Linux
How to reproduce the bug:
- In a git repo, create a branch which does not exist in the upstream repository
- Attempt to push the branch with `git push`
- You should see an error message saying "fatal: The current branch branch-name has no upstream branch. To push the current branch and set the remote as upstream, use git push --set-upstream origin branch-name"
- invoke `thefuck`
- Prior to 3.31, `thefuck` would present as the first option the exact command which git tells you to use (git push --set-upstream origin branch-name).
- As of 3.31, `thefuck` instead presents as the first option a more generic `git push --no-verify`, and git's recommended command is the second result.
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
https://pastebin.com/qpyEcreC
If the bug only appears with a specific application, the output of that application and its version:
git version 2.32.0
Anything else you think is relevant:
N/A
| null | https://github.com/nvbn/thefuck/pull/1208 | null | {'base_commit': '0949d2e77022ad69cc07d4b25a858a7e023503ac', 'files': [{'path': 'thefuck/rules/git_hook_bypass.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [26]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"thefuck/rules/git_hook_bypass.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | b8a43011e75da4353b0d5ef314c96cb1276f12f0 | https://github.com/scrapy/scrapy/issues/3893 |  | [Bug] 1.7.1 does not support 1.6.0 script | Hello All,
My spider is created by scrapy 1.6.0.
These days, the scrapy updated to 1.7.1, and we found that it cannot support the code build by 1.6.0.
Here is the error:
```
Traceback (most recent call last):
File "/usr/bin/scrapy", line 6, in <module>
from scrapy.cmdline import execute
File "/usr/lib64/python2.7/site-packages/scrapy/cmdline.py", line 10, in <module>
from scrapy.crawler import CrawlerProcess
File "/usr/lib64/python2.7/site-packages/scrapy/crawler.py", line 11, in <module>
from scrapy.core.engine import ExecutionEngine
File "/usr/lib64/python2.7/site-packages/scrapy/core/engine.py", line 14, in <module>
from scrapy.core.scraper import Scraper
File "/usr/lib64/python2.7/site-packages/scrapy/core/scraper.py", line 18, in <module>
from scrapy.core.spidermw import SpiderMiddlewareManager
File "/usr/lib64/python2.7/site-packages/scrapy/core/spidermw.py", line 13, in <module>
from scrapy.utils.conf import build_component_list
File "/usr/lib64/python2.7/site-packages/scrapy/utils/conf.py", line 4, in <module>
import configparser
ImportError: No module named configparser
```
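The traceback is the Python 2/3 module rename (`ConfigParser` became `configparser` in Python 3); the usual compatibility shim looks like the following (the actual fix in the linked PR may differ, e.g. by going through a compatibility library):

```python
try:
    import configparser  # Python 3 name
except ImportError:
    import ConfigParser as configparser  # Python 2 fallback

# Either way, the same class is available afterwards.
parser = configparser.ConfigParser()
```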
Would you please take time to check the issue?
Appreciate for your help in advance.
Thank you. | null | https://github.com/scrapy/scrapy/pull/3896 | null | {'base_commit': 'b8a43011e75da4353b0d5ef314c96cb1276f12f0', 'files': [{'path': 'scrapy/utils/conf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7], 'mod': [4]}, "(None, 'get_config', 94)": {'mod': [97]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"scrapy/utils/conf.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
All-Hands-AI | OpenHands | 454e9613b0b4c7a9dbb2b8273aff0b36c4d8a2bb | https://github.com/All-Hands-AI/OpenHands/issues/1276 | bug | [Bug]: Browsing is not working | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://github.com/OpenDevin/OpenDevin/blob/main/docs/guides/Troubleshooting.md
- [X] I have checked the existing issues.
### Describe the bug
When I ask a question that requires browsing the web to get the answer, OpenDevin does not use the "browsing" tab.
For instance, I asked
```
Please resolve this pull request: https://github.com/OpenDevin/OpenDevin/issues/1275
```
In trying to resolve the pull request, OpenDevin tried to install Playwright to browse the web instead of using its built-in browsing capability, and the browsing tab said "no screenshot available".
### Current Version
```bash
`ghcr.io/opendevin/opendevin:0.3.1`
```
### Installation and Configuration
```bash
export LLM_API_KEY="sk-..."
export WORKSPACE_DIR=$(pwd)/workspace
```
### Model and Agent
_No response_
### Reproduction Steps
1. Ask OpenDevin: `Please resolve this pull request: https://github.com/OpenDevin/OpenDevin/issues/1275`
### Logs, Errors, Screenshots, and Additional Context
It seems that this is a relevant error:
```
STEP 2
23:12:20 - opendevin:INFO: agent_controller.py:89
PLAN
Resolve this pull request: https://github.com/OpenDevin/OpenDevin/issues/1275
23:12:26 - opendevin:INFO: agent_controller.py:107
ACTION
BrowseURLAction(url='https://github.com/OpenDevin/OpenDevin/pull/1275', action=<ActionType.BROWSE: 'browse'>)
huggingface/tokenizers: The current process just got forked, after parallelism has already been used. Disabling parallelism to avoid deadlocks...
To disable this warning, you can either:
- Avoid using `tokenizers` before the fork if possible
- Explicitly set the environment variable TOKENIZERS_PARALLELISM=(true | false)
23:12:27 - opendevin:INFO: agent_controller.py:160
OBSERVATION
BrowserType.launch: Executable doesn't exist at /root/.cache/ms-playwright/chromium-1112/chrome-linux/chrome
╔════════════════════════════════════════════════════════════╗
║ Looks like Playwright was just installed or updated. ║
║ Please run the following command to download new browsers: ║
║ ║
║ playwright install ║
║ ║
║ <3 Playwright Team ║
╚════════════════════════════════════════════════════════════╝
``` | null | https://github.com/All-Hands-AI/OpenHands/pull/1184 | null | {'base_commit': '454e9613b0b4c7a9dbb2b8273aff0b36c4d8a2bb', 'files': [{'path': 'containers/app/Dockerfile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [50]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"containers/app/Dockerfile"
],
"asset": []
} | 1 |
fastapi | fastapi | 92c825be6a7362099400c9c3fe8b01ea13add3dc | https://github.com/fastapi/fastapi/issues/8 | feature
answered
reviewed | Nesting FastAPI instances doesn't work very well | Do this:
```
main_app = FastAPI()
sub_api = FastAPI()
...
main_app.router.routes.append(Mount('/subapi', app=sub_api))
```
`sub_api` will correctly serve everything under `/subapi` -- docs, methods, all that. However, the docs will still look for `/openapi.json` (absolute link) when trying to load the openapi spec. Additionally, the spec will not be adjusted to have the correct links, relative to where the module is mounted.
Perhaps this is a corner use case, but a lot of apps might have different collections of routes mounted in different subpaths. | null | https://github.com/fastapi/fastapi/pull/26 | null | {'base_commit': '92c825be6a7362099400c9c3fe8b01ea13add3dc', 'files': [{'path': 'fastapi/applications.py', 'status': 'modified', 'Loc': {"('FastAPI', '__init__', 20)": {'add': [27, 45]}, "('FastAPI', 'openapi', 61)": {'add': [68]}, "('FastAPI', 'setup', 72)": {'mod': [83, 91]}}}, {'path': 'fastapi/openapi/utils.py', 'status': 'modified', 'Loc': {"(None, 'get_openapi', 212)": {'mod': [218, 237]}}}, {'path': 'mkdocs.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [59]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"fastapi/openapi/utils.py",
"fastapi/applications.py"
],
"doc": [
"mkdocs.yml"
],
"test": [],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 9f1b9dbf60f406e8d6205402b8ac078195cd0c01 | https://github.com/localstack/localstack/issues/4517 | type: bug
status: triage needed
aws:cloudformation
aws:iam | bug: AWS::NoValue produces error when used in IAM policy template | ### Is there an existing issue for this?
- [x] I have searched the existing issues
### Current Behavior
When I try to create a role with an S3 resource and use `!Ref AWS::NoValue` for one of its resources, it fails with errors. The entry is supposed to be removed from the array, but it looks like it evaluates to `__aws_no_value__`, which then fails to validate because the value is not in an acceptable format for an ARN.
(Message: `Resource __aws_no_value__ must be in ARN format or "*".`)
template file `test.template` :
```
AWSTemplateFormatVersion: 2010-09-09
Conditions:
  someCondition: false
Resources:
  SomeRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: SomeRole
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - lambda.amazonaws.com
            Action:
              - sts:AssumeRole
      Policies:
        - PolicyName: SomePolicy
          PolicyDocument:
            Version: 2012-10-17
            Statement:
              - Effect: Allow
                Action:
                  - s3:GetObject
                  - s3:GetObjectVersion
                Resource:
                  - arn:aws:s3:::some-prefix-*/*
                  - !If
                    - someCondition
                    - !Ref arn:aws:s3:::another-prefix-*/*
                    - !Ref AWS::NoValue
```
Executed command:
```
awslocal cloudformation deploy \
--no-fail-on-empty-changeset \
--capabilities CAPABILITY_NAMED_IAM \
--template-file test.template \
--stack-name "test-stack"
```
Error log produced:
```
2021-08-30T07:38:54:DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack "test-resources-iam": An error occurred (MalformedPolicyDocument) when calling the PutRolePolicy operation: Resource __aws_no_value__ must be in ARN format or "*". Traceback (most recent call last):
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 2083, in _run
self.do_apply_changes_in_loop(changes, stack, stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 2154, in do_apply_changes_in_loop
self.apply_change(change, stack, new_resources, stack_name=stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 2218, in apply_change
result = deploy_resource(resource_id, new_resources, stack_name)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1037, in deploy_resource
return execute_resource_action(resource_id, resources, stack_name, ACTION_CREATE)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1152, in execute_resource_action
resource_id, resources, resource_type, func, stack_name, action_name
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1314, in configure_resource_via_sdk
run_post_create_actions(action_name, resource_id, resources, resource_type, stack_name, result)
File "/opt/code/localstack/localstack/utils/cloudformation/template_deployer.py", line 1414, in run_post_create_actions
PolicyDocument=doc,
File "/opt/code/localstack/.venv/lib/python3.7/site-packages/botocore/client.py", line 386, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/opt/code/localstack/.venv/lib/python3.7/site-packages/botocore/client.py", line 705, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.MalformedPolicyDocumentException: An error occurred (MalformedPolicyDocument) when calling the PutRolePolicy operation: Resource __aws_no_value__ must be in ARN format or "*".
```
### Expected Behavior
Create stack without failing
### How are you starting LocalStack?
With the `localstack` script
### Steps To Reproduce
#### How are you starting localstack (e.g., `bin/localstack` command, arguments, or `docker-compose.yml`)
```
FORCE_NONINTERACTIVE=1 \
SERVICES=iam,s3,lambda,cloudformation \
localstack infra start &
```
#### Client commands (e.g., AWS SDK code snippet, or sequence of "awslocal" commands)
```
awslocal cloudformation deploy \
--no-fail-on-empty-changeset \
--capabilities CAPABILITY_NAMED_IAM \
--template-file test.template \
--stack-name "test-stack"
```
### Environment
```markdown
- OS: Ubuntu 20.04
- LocalStack: latest
```
### Anything else?
_No response_ | null | https://github.com/localstack/localstack/pull/6760 | null | {'base_commit': '9f1b9dbf60f406e8d6205402b8ac078195cd0c01', 'files': [{'path': 'localstack/services/cloudformation/models/cloudwatch.py', 'status': 'modified', 'Loc': {"('CloudWatchAlarm', None, 6)": {'add': [11]}}}, {'path': 'localstack/services/cloudformation/models/iam.py', 'status': 'modified', 'Loc': {"('IAMRole', '_post_create', 278)": {'add': [314]}, "('IAMManagedPolicy', '_create', 46)": {'mod': [51]}}}, {'path': 'tests/integration/cloudformation/test_cloudformation_iam.py', 'status': 'modified', 'Loc': {"(None, 'test_iam_user_access_key', 156)": {'add': [174]}, '(None, None, None)': {'mod': [4, 8, 10, 13, 14, 15, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]}, "(None, 'test_delete_role_detaches_role_policy', 18)": {'mod': [29, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 42, 43, 45, 46, 47, 48, 50, 51, 52, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 74, 75, 76, 78, 79, 80]}, "(None, 'test_policy_attachments', 83)": {'mod': [110]}}}, {'path': 'tests/integration/cloudformation/test_cloudformation_iam.snapshot.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}}}, {'path': 'tests/integration/templates/iam_policy_attachments.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [18]}}}, {'path': 'tests/integration/templates/iam_role_policy.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [12, 19, 20]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/services/cloudformation/models/cloudwatch.py",
"tests/integration/cloudformation/test_cloudformation_iam.snapshot.json",
"localstack/services/cloudformation/models/iam.py"
],
"doc": [],
"test": [
"tests/integration/cloudformation/test_cloudformation_iam.py"
],
"config": [
"tests/integration/templates/iam_role_policy.yaml",
"tests/integration/templates/iam_policy_attachments.yaml"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | c4c1203fc07b2e23c3e5a5e9277266a711ab9466 | https://github.com/AntonOsika/gpt-engineer/issues/117 | bug | GPT Engineer will not save individual files when given specs that result in many files. | The generated code goes into the logfile however it would be more useful if the tool could make all those files automatically. | null | https://github.com/AntonOsika/gpt-engineer/pull/120 | null | {'base_commit': 'c4c1203fc07b2e23c3e5a5e9277266a711ab9466', 'files': [{'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "(None, 'parse_chat', 6)": {'add': [11], 'mod': [6, 7, 8, 10, 13, 14, 15, 16, 17, 18, 19]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"gpt_engineer/chat_to_files.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 40e623b2768598e36c4f367bd166b36fffceb3f6 | https://github.com/scrapy/scrapy/issues/6177 | enhancement
docs | Switch to the latest sphinx | The docs fail to build with the current Sphinx (7.2.6):
```
reading sources... [ 48%] topics/downloader-middleware
Extension error (scrapydocs):
Handler <function collect_scrapy_settings_refs at 0x7f81fc663a60> for event 'doctree-read' threw an exception (exception: Next node is not a target)
```
So we should update deps in docs/requirements.txt, fix this (and maybe others) problem and make sure the docs are built correctly. | null | https://github.com/scrapy/scrapy/pull/6200 | null | {'base_commit': '40e623b2768598e36c4f367bd166b36fffceb3f6', 'files': [{'path': 'docs/requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 3, 4]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"docs/requirements.txt"
],
"test": [],
"config": [],
"asset": []
} | 1 |
geekan | MetaGPT | 0958cc333ee13d1ce5216ae0bdeaa53b5eacc6ea | https://github.com/geekan/MetaGPT/issues/1059 | bug | ReadTimeout when using local LLM | **Bug description**
When hosting the following model; https://huggingface.co/oobabooga/CodeBooga-34B-v0.1 locally using LMStudio 0.2.14 on Linux Mint 21.3 Cinnamon I am sometimes (usually after several iterations when the context gets large) confronted with a ReadTimeout.
MetaGPT main branch, commit id: adb42f4, it reports version: 0.7.4 with pip show metagpt. Used Python 3.9.18.
I used the following code to try out MetaGPT
```
import asyncio
from metagpt.roles.di.data_interpreter import DataInterpreter
async def main(requirement: str = ""):
    di = DataInterpreter()
    await di.run(requirement)

if __name__ == "__main__":
    requirement = "Create a dnd 5th edition graph displaying xp per level based on information from a reputable source determined by Googling. First write results in a CSV and validate the CSV contains multiple records. If the file does not contain records, determine if you can fix the code or whether you need to look at another source. After the CSV files is filled with records, create the graph based on this."
    asyncio.run(main(requirement))
```
I got the below exception
```
Traceback (most recent call last):
File "metagpt/lib/python3.9/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "metagpt/lib/python3.9/site-packages/httpx/_transports/default.py", line 254, in __aiter__
async for part in self._httpcore_stream:
File "metagpt/lib/python3.9/site-packages/httpcore/_async/connection_pool.py", line 367, in __aiter__
raise exc from None
File "metagpt/lib/python3.9/site-packages/httpcore/_async/connection_pool.py", line 363, in __aiter__
async for part in self._stream:
File "metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py", line 349, in __aiter__
raise exc
File "metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py", line 341, in __aiter__
async for chunk in self._connection._receive_response_body(**kwargs):
File "metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py", line 210, in _receive_response_body
event = await self._receive_event(timeout=timeout)
File "metagpt/lib/python3.9/site-packages/httpcore/_async/http11.py", line 224, in _receive_event
data = await self._network_stream.read(
File "metagpt/lib/python3.9/site-packages/httpcore/_backends/anyio.py", line 36, in read
return b""
File "3.9.18/lib/python3.9/contextlib.py", line 137, in __exit__
self.gen.throw(typ, value, traceback)
File "metagpt/lib/python3.9/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ReadTimeout
```
**Bug solved method**
It would be nice if the timeout and retries are configurable to avoid this issue (for example like AutoGen does this in the LLM API configuration). N.b. I've tried larger local models in the past (for which disk swapping was required due to memory constraints). Those models can sometimes take more than an hour to respond. The model for which this bug is registered can fit in my CPU RAM (64Gb). | null | https://github.com/geekan/MetaGPT/pull/1060 | null | {'base_commit': '0958cc333ee13d1ce5216ae0bdeaa53b5eacc6ea', 'files': [{'path': 'config/config2.example.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6]}}}, {'path': 'metagpt/actions/action_node.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, "('ActionNode', '_aask_v1', 411)": {'mod': [419]}, "('ActionNode', None, 122)": {'mod': [451]}, "('ActionNode', 'fill', 468)": {'mod': [476]}}}, {'path': 'metagpt/configs/llm_config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, "('LLMConfig', 'check_llm_key', 87)": {'add': [90]}, "('LLMConfig', None, 38)": {'mod': [77]}}}, {'path': 'metagpt/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [134], 'mod': [126]}}}, {'path': 'metagpt/provider/anthropic_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "('AnthropicLLM', None, 14)": {'mod': [44, 49, 50, 52]}}}, {'path': 'metagpt/provider/base_llm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}, "('BaseLLM', 'with_model', 257)": {'add': [260]}, "('BaseLLM', 'aask', 127)": {'mod': [133, 149]}, "('BaseLLM', None, 32)": {'mod': [155, 165, 169, 173, 184, 194]}, "('BaseLLM', 'aask_batch', 155)": {'mod': [161]}, "('BaseLLM', 'acompletion_text', 194)": {'mod': [197, 198]}}}, {'path': 'metagpt/provider/dashscope_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27]}, "('DashScopeLLM', None, 152)": {'mod': [205, 211, 212, 214]}}}, {'path': 'metagpt/provider/general_api_base.py', 
'status': 'modified', 'Loc': {"('APIRequestor', 'arequest_raw', 556)": {'mod': [576]}}}, {'path': 'metagpt/provider/google_gemini_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, "('GeminiLLM', None, 41)": {'mod': [126, 132, 133, 135]}}}, {'path': 'metagpt/provider/human_provider.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('HumanProvider', None, 13)": {'mod': [21, 38, 41, 45, 48]}, "('HumanProvider', 'aask', 28)": {'mod': [34, 36]}}}, {'path': 'metagpt/provider/ollama_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8]}, "('OllamaLLM', None, 17)": {'mod': [53, 65, 66, 68]}, "('OllamaLLM', '_achat_completion', 53)": {'mod': [58]}, "('OllamaLLM', '_achat_completion_stream', 68)": {'mod': [74]}}}, {'path': 'metagpt/provider/openai_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27]}, "('OpenAILLM', None, 43)": {'mod': [77, 107, 121, 122, 127, 128, 137, 154]}, "('OpenAILLM', '_achat_completion_stream', 77)": {'mod': [79]}, "('OpenAILLM', '_cons_kwargs', 107)": {'mod': [115]}, "('OpenAILLM', 'acompletion_text', 137)": {'mod': [142]}, "('OpenAILLM', '_achat_completion_function', 145)": {'mod': [146, 149]}}}, {'path': 'metagpt/provider/qianfan_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, "('QianFanLLM', None, 23)": {'mod': [110, 115, 116, 118]}}}, {'path': 'metagpt/provider/spark_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, "('SparkLLM', None, 26)": {'mod': [34, 37, 43, 46]}}}, {'path': 'metagpt/provider/zhipuai_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "('ZhiPuAILLM', None, 26)": {'mod': [48, 54, 60, 61, 63]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [37]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/configs/llm_config.py",
"metagpt/provider/zhipuai_api.py",
"metagpt/provider/ollama_api.py",
"metagpt/provider/spark_api.py",
"metagpt/provider/anthropic_api.py",
"metagpt/provider/qianfan_api.py",
"metagpt/actions/action_node.py",
"metagpt/provider/openai_api.py",
"metagpt/provider/google_gemini_api.py",
"metagpt/provider/dashscope_api.py",
"metagpt/provider/base_llm.py",
"metagpt/provider/general_api_base.py",
"metagpt/const.py",
"metagpt/provider/human_provider.py"
],
"doc": [],
"test": [],
"config": [
"config/config2.example.yaml",
"requirements.txt"
],
"asset": []
} | 1 |
Textualize | rich | b5f0b743a7f50c72199eb792cd6e70730b60651f | https://github.com/Textualize/rich/issues/2047 | Needs triage | [BUG] printing -\n- in rich.progress context manager will kill the jupyter. | try this code in the jupyter notebook:
```python
from rich.progress import Progress
with Progress() as progress:
    print("-\n-")
print("finished")
```
and it will show a popup message displaying that the kernel has died.
I have tested it on google colab and mint.
also, I have installed rich using
```
pip install rich[jupyter]
``` | null | https://github.com/Textualize/rich/pull/2209 | null | {'base_commit': 'b5f0b743a7f50c72199eb792cd6e70730b60651f', 'files': [{'path': 'CHANGELOG.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'rich/file_proxy.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2]}, "('FileProxy', 'flush', 50)": {'mod': [51, 52, 53, 54]}}}, {'path': 'tests/test_file_proxy.py', 'status': 'modified', 'Loc': {"(None, 'test_flush', 20)": {'add': [27]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"rich/file_proxy.py"
],
"doc": [
"CHANGELOG.md"
],
"test": [
"tests/test_file_proxy.py"
],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | 439c19596a248a31cd1aa8220f54a622a0322160 | https://github.com/scikit-learn/scikit-learn/issues/3689 | using sparse matrix in fit_params | When the value of a fit_params entry is a sparse matrix, it will raise an error from the following code.
sklearn/cross_validation.py
```
1224 if hasattr(v, '__len__') and len(v) == n_samples else v)
1225 for k, v in fit_params.items()])
```
It is because the `__len__` of sparse matrix is defined as
scipy/sparse/base.py
```
190 def __len__(self):
191 # return self.getnnz()
192 raise TypeError("sparse matrix length is ambiguous; use getnnz()"
193 " or shape[0]")
```
Is there any way to circumvent this issue? I do not want to convert the sparse matrix into a dense one, since it would consume a lot of memory.
| null | https://github.com/scikit-learn/scikit-learn/pull/4049 | null | {'base_commit': '439c19596a248a31cd1aa8220f54a622a0322160', 'files': [{'path': 'sklearn/cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1073]}, "(None, '_fit_and_predict', 1150)": {'mod': [1186, 1188, 1189, 1190]}, "(None, '_fit_and_score', 1305)": {'mod': [1379, 1381, 1382, 1383, 1384, 1385]}}}, {'path': 'sklearn/tests/test_cross_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1108]}, "(None, 'assert_fit_params', 595)": {'mod': [596]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/cross_validation.py"
],
"doc": [],
"test": [
"sklearn/tests/test_cross_validation.py"
],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69 | https://github.com/scikit-learn/scikit-learn/issues/9174 | Bug
help wanted | SVC and OneVsOneClassifier decision_function inconsistent on sub-sample | Hi,
I'm seeing inconsistent numerical results with SVC's decision_function.
When the decision function is estimated over an entire batch of samples (an (n_samples, n_features) matrix), compared to analyzing sample-by-sample, the results are not the same.
This is true for both the individual numerical values per sample and the overall distribution of the results.
**The model is SVC with RBF kernel, for a 3-class classification:**
```
SVC(C=1.0, gamma=0.007, class_weight = new_class_weight, probability = True, random_state = 30,
decision_function_shape = 'ovr')
```
**The models are loaded from file:**
`ML = joblib.load("model.pkl")`
**Option A, analyze a matrix:**
`distances = ML.decision_function(X)`
**Option B, analyze individual samples:**
```
distances = numpy.zeros([X.shape[0], 3])
for i in range(X.shape[0]):
    distances[i,:] = ML.decision_function(X[i,:].reshape(1,-1))
```
**Output for first two samples:**
**Option A:**
sample 1: [ 0.90835588, -0.17305875, 2.26470288]
sample 2: [ 1.10437313, -0.2371539 , 2.13278077]
**Option B:**
sample 1: [ 0.82689247, -0.32689247, 2.5 ]
sample 2: [ 1.22005359, -0.5 , 2.27994641]
I couldn't find any indication for this behavior in the documentation.
Windows-10-10.0.15063-SP0
Python 3.5.2 |Anaconda 4.2.0 (64-bit)| (default, Jul 5 2016, 11:41:13) [MSC v.1900 64 bit (AMD64)]
NumPy 1.12.1
SciPy 0.18.1
Scikit-Learn 0.18.1
Thanks!
| null | https://github.com/scikit-learn/scikit-learn/pull/10440 | null | {'base_commit': 'adc1e590d4dc1e230b49a4c10b4cd7b672bb3d69', 'files': [{'path': 'doc/modules/multiclass.rst', 'status': 'modified', 'Loc': {'(None, None, 230)': {'mod': [230]}}}, {'path': 'doc/modules/svm.rst', 'status': 'modified', 'Loc': {'(None, None, 116)': {'mod': [116]}, '(None, None, 118)': {'mod': [118]}}}, {'path': 'doc/whats_new/v0.21.rst', 'status': 'modified', 'Loc': {'(None, None, 26)': {'add': [26]}, '(None, None, 353)': {'add': [353]}}}, {'path': 'sklearn/svm/base.py', 'status': 'modified', 'Loc': {"('BaseSVC', 'decision_function', 527)": {'add': [549]}}}, {'path': 'sklearn/utils/estimator_checks.py', 'status': 'modified', 'Loc': {"(None, 'check_methods_subset_invariance', 815)": {'mod': [839, 840]}}}, {'path': 'sklearn/utils/multiclass.py', 'status': 'modified', 'Loc': {"(None, '_ovr_decision_function', 402)": {'mod': [434, 435, 437, 438, 440, 444, 445, 446, 447]}}}, {'path': 'sklearn/utils/tests/test_multiclass.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 25]}, "(None, 'test_safe_split_with_precomputed_kernel', 361)": {'add': [380]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/utils/multiclass.py",
"sklearn/utils/estimator_checks.py",
"sklearn/svm/base.py"
],
"doc": [
"doc/whats_new/v0.21.rst",
"doc/modules/multiclass.rst",
"doc/modules/svm.rst"
],
"test": [
"sklearn/utils/tests/test_multiclass.py"
],
"config": [],
"asset": []
} | null |
scrapy | scrapy | da90449edfa13b5be1550b3acc212dbf3a8c6e69 | https://github.com/scrapy/scrapy/issues/1064 | allow spiders to return dicts instead of Items | In many cases the requirement to define and yield Items from a spider is an unnecessary complication.
An example from the Scrapy tutorial:
```
import scrapy
class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            yield item
```
It can be made simpler with dicts instead of Items:
```
import scrapy
class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        for sel in response.xpath('//ul/li'):
            yield {
                'title': sel.xpath('a/text()').extract(),
                'link': sel.xpath('a/@href').extract(),
                'desc': sel.xpath('text()').extract(),
            }
```
The version with dicts gives a developer fewer concepts to learn, and it is easier to explain.
When field metadata is not used and data is exported to JSON/XML, yielding Python dicts should be enough. Even when you export to CSV, dicts could be enough - columns can be set explicitly by the user.
This should also prevent tickets like https://github.com/scrapy/scrapy/issues/968.
| null | https://github.com/scrapy/scrapy/pull/1081 | null | {'base_commit': 'da90449edfa13b5be1550b3acc212dbf3a8c6e69', 'files': [{'path': 'docs/index.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [61, 86], 'mod': [59, 75, 76]}}}, {'path': 'docs/topics/architecture.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [105, 108]}}}, {'path': 'docs/topics/exporters.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [199, 205], 'mod': [10, 93, 94, 95, 170, 171]}}}, {'path': 'docs/topics/images.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [66, 67, 68]}}}, {'path': 'docs/topics/item-pipeline.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [137], 'mod': [11, 12, 31, 32, 36, 158, 159]}}}, {'path': 'docs/topics/items.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13], 'mod': [11, 12, 16, 67]}}}, {'path': 'docs/topics/practices.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [187, 189, 190, 192, 193, 194, 196, 199, 201, 202, 204]}}}, {'path': 'docs/topics/signals.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74, 94]}}}, {'path': 'docs/topics/spider-middleware.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [93, 100, 101, 113]}}}, {'path': 'docs/topics/spiders.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [284, 290], 'mod': [27, 28, 44, 46, 47, 49, 50, 51, 52, 54, 55, 57, 59, 61, 63, 64, 66, 67, 68, 69, 71, 72, 74, 76, 77, 79, 80, 81, 82, 84, 85, 87, 89, 90, 91, 92, 98, 99, 106, 107, 201, 202, 203, 204, 234, 251, 252, 271, 274]}}}, {'path': 'scrapy/commands/parse.py', 'status': 'modified', 'Loc': {"('Command', 'run_callback', 106)": {'mod': [110]}}}, {'path': 'scrapy/contracts/default.py', 'status': 'modified', 'Loc': {"('ReturnsContract', None, 21)": {'mod': [38, 39]}, "('ScrapesContract', 'post_process', 84)": {'mod': [86]}}}, {'path': 'scrapy/contrib/exporter/__init__.py', 'status': 
'modified', 'Loc': {"('BaseItemExporter', '_get_serialized_fields', 52)": {'mod': [53, 54, 59, 67, 68, 69, 72]}, "('CsvItemExporter', '_write_headers_and_set_fields_to_export', 191)": {'mod': [194]}}}, {'path': 'scrapy/contrib/pipeline/files.py', 'status': 'modified', 'Loc': {"('FilesPipeline', 'item_completed', 269)": {'mod': [270]}}}, {'path': 'scrapy/contrib/pipeline/images.py', 'status': 'modified', 'Loc': {"('ImagesPipeline', 'item_completed', 111)": {'mod': [112]}}}, {'path': 'scrapy/core/scraper.py', 'status': 'modified', 'Loc': {"('Scraper', '_process_spidermw_output', 171)": {'mod': [177, 186]}}}, {'path': 'tests/spiders.py', 'status': 'modified', 'Loc': {"('ItemSpider', 'parse', 84)": {'add': [87]}}}, {'path': 'tests/test_commands.py', 'status': 'modified', 'Loc': {"('RunSpiderCommandTest', 'test_runspider', 132)": {'add': [137], 'mod': [139, 141]}, '(None, None, None)': {'add': [241]}, "('ParseCommandTest', 'setUp', 188)": {'mod': [195, 196, 198, 204]}}}, {'path': 'tests/test_contracts.py', 'status': 'modified', 'Loc': {"('TestSpider', None, 25)": {'add': [41, 48, 56, 64]}, "('ContractsManagerTest', 'test_returns', 104)": {'add': [112]}, "('ContractsManagerTest', None, 72)": {'add': [122]}, "('ContractsManagerTest', 'test_scrapes', 123)": {'add': [131, 136]}}}, {'path': 'tests/test_contrib_exporter.py', 'status': 'modified', 'Loc': {"('BaseItemExporterTest', None, 18)": {'add': [45], 'mod': [36]}, "('XmlItemExporterTest', None, 196)": {'add': [213]}, '(None, None, None)': {'add': [327], 'mod': [1, 5, 9, 10, 11]}, "('BaseItemExporterTest', 'test_export_item', 36)": {'mod': [39]}, "('BaseItemExporterTest', 'test_serialize_field', 46)": {'mod': [47, 48, 49, 50]}, "('PythonItemExporterTest', 'test_nested_item', 79)": {'mod': [81]}, "('CsvItemExporterTest', None, 140)": {'mod': [153, 154, 155]}, "('CsvItemExporterTest', 'test_header', 153)": {'mod': [157, 158, 159, 161, 162, 163, 164, 165, 166, 168, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 181]}, 
"('CsvItemExporterTest', 'test_join_multivalue', 183)": {'mod': [188, 189, 190, 191, 192, 193, 194]}, "('XmlItemExporterTest', 'test_multivalued_fields', 218)": {'mod': [219, 220, 221, 222, 223, 224, 225, 226]}, "('XmlItemExporterTest', 'test_nested_item', 228)": {'mod': [229, 231, 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 248]}, "('XmlItemExporterTest', 'test_nested_list_item', 250)": {'mod': [251, 253, 255, 256, 257, 258, 259, 260, 261, 262, 263, 264, 265, 266, 267]}, "('JsonLinesItemExporterTest', 'test_nested_item', 281)": {'mod': [283]}, "('JsonItemExporterTest', None, 298)": {'mod': [309]}, "('JsonItemExporterTest', 'test_two_items', 309)": {'mod': [311, 312, 315]}, "('CustomItemExporter', 'serialize_field', 332)": {'mod': [336, 337]}, "('CustomItemExporterTest', 'test_exporter_custom_serializer', 330)": {'mod': [342, 343, 344, 345]}}}, {'path': 'tests/test_engine.py', 'status': 'modified', 'Loc': {"('TestSpider', None, 36)": {'add': [43]}, '(None, None, None)': {'add': [67]}, "('TestSpider', 'parse_item', 51)": {'mod': [52]}, "('CrawlerRun', None, 81)": {'mod': [84]}, "('CrawlerRun', '__init__', 84)": {'mod': [91, 92]}, "('EngineTest', 'test_crawler', 154)": {'mod': [155, 156, 157, 158, 159, 160, 161, 162]}}}, {'path': 'tests/test_pipeline_files.py', 'status': 'modified', 'Loc': {"('FilesPipelineTestCaseFields', 'test_item_fields_default', 144)": {'mod': [145, 150, 151, 152, 153, 154, 155, 156, 157]}, "('FilesPipelineTestCaseFields', 'test_item_fields_override_settings', 159)": {'mod': [160, 165, 166, 167, 168, 169, 170, 171, 172, 173]}}}, {'path': 'tests/test_pipeline_images.py', 'status': 'modified', 'Loc': {"('ImagesPipelineTestCaseFields', 'test_item_fields_default', 170)": {'mod': [171, 176, 177, 178, 179, 180, 181, 182, 183]}, "('ImagesPipelineTestCaseFields', 'test_item_fields_override_settings', 185)": {'mod': [186, 191, 192, 193, 194, 195, 196, 197, 198, 199]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/core/scraper.py",
"scrapy/commands/parse.py",
"scrapy/contracts/default.py",
"scrapy/contrib/exporter/__init__.py",
"tests/spiders.py",
"scrapy/contrib/pipeline/images.py",
"scrapy/contrib/pipeline/files.py"
],
"doc": [
"docs/topics/practices.rst",
"docs/topics/signals.rst",
"docs/topics/spiders.rst",
"docs/topics/architecture.rst",
"docs/topics/items.rst",
"docs/index.rst",
"docs/topics/exporters.rst",
"docs/topics/item-pipeline.rst",
"docs/topics/images.rst",
"docs/topics/spider-middleware.rst"
],
"test": [
"tests/test_contracts.py",
"tests/test_engine.py",
"tests/test_commands.py",
"tests/test_pipeline_files.py",
"tests/test_pipeline_images.py",
"tests/test_contrib_exporter.py"
],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | effd75dda5f4afa61f988035ff8fe4b3a447464e | https://github.com/scikit-learn/scikit-learn/issues/10059 | Duplicated input points silently create duplicated clusters in KMeans | #### Description
When duplicated input points to KMeans result in the number of unique points being smaller than the number of requested clusters, no error is thrown. Instead, clustering continues to (seemingly) produce the number of clusters requested, but some of them are exactly the same, so the cluster labels produced for the input points do not cover all of the requested clusters.
#### Steps/Code to Reproduce
```python
from sklearn.cluster import KMeans
import numpy as np
# some input points here are identical, so that n_total=17, n_unique=9
x2d = np.array([(1086, 348), (1087, 347), (1190, 244), (1190, 244), (1086, 348), (1185, 249), (1193, 241), (1185, 249), (1087, 347), (1188, 247), (1187, 233), (26, 111), (26, 111), (26, 110), (26, 110), (26, 110), (26, 110)])
kmeans = KMeans(n_clusters=10) # n_clusters > n_unique
c_labels = kmeans.fit_predict(x2d)
c_centers = kmeans.cluster_centers_
```
#### Expected Results
Either an error thrown, or the cluster labels produced should match the unique clusters only (i.e. no identical cluster centres)
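One way to get the first of the two expected behaviors is a fail-fast pre-check before fitting. The sketch below is a hypothetical helper, not scikit-learn's actual API; it only illustrates the invariant that KMeans cannot produce more distinct centers than there are unique input points.

```python
import numpy as np

def check_enough_unique_points(X, n_clusters):
    # hypothetical pre-check (not scikit-learn's actual code): KMeans cannot
    # produce more distinct centers than there are unique input points, so
    # fail fast instead of silently returning duplicated centers
    n_unique = np.unique(X, axis=0).shape[0]
    if n_unique < n_clusters:
        raise ValueError(
            "n_clusters=%d must be <= the number of unique points (%d)"
            % (n_clusters, n_unique))
```

For the 17-point array above, `np.unique(x2d, axis=0)` has 9 rows, so this check would reject `n_clusters=10` immediately.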
#### Actual Results
```python
>>> c_labels # note there's no entry for cluster 9
array([7, 2, 6, 6, 7, 5, 4, 5, 2, 1, 3, 8, 8, 0, 0, 0, 0], dtype=int32)
>>> c_centers # two of these 10 clusters have identical centers, so only 9 of them are unique
array([[ 26., 110.],
[ 1188., 247.],
[ 1087., 347.],
[ 1187., 233.],
[ 1193., 241.],
[ 1185., 249.],
[ 1190., 244.],
[ 1086., 348.],
[ 26., 111.],
[ 26., 110.]])
```
#### Versions
```python
Darwin-16.7.0-x86_64-i386-64bit
Python 3.6.1 |Continuum Analytics, Inc.| (default, May 11 2017, 13:04:09)
[GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)]
NumPy 1.13.1
SciPy 0.19.1
Scikit-Learn 0.18.2
``` | null | https://github.com/scikit-learn/scikit-learn/pull/10099 | null | {'base_commit': 'effd75dda5f4afa61f988035ff8fe4b3a447464e', 'files': [{'path': 'doc/whats_new/v0.20.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [136]}}}, {'path': 'sklearn/cluster/k_means_.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [34]}, "(None, 'k_means', 167)": {'add': [376]}}}, {'path': 'sklearn/cluster/tests/test_k_means.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17, 20]}, "(None, 'test_sparse_validate_centers', 855)": {'add': [869]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": "code"
} | {
"code": [
"sklearn/cluster/k_means_.py"
],
"doc": [
"doc/whats_new/v0.20.rst"
],
"test": [
"sklearn/cluster/tests/test_k_means.py"
],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 0dad0fce72266aa7b38b536f87bab26e7f233c74 | https://github.com/scrapy/scrapy/issues/4477 | bug | is_generator_with_return_value raises IndentationError with a flush left doc string | ### Description
Code that is accepted by the python interpreter raises when fed through `textwrap.dedent`
### Steps to Reproduce
1. Create `is_generator_bug.py` with the content below (which I simplified from [the `is_generator_with_return_value` method body](https://github.com/scrapy/scrapy/blob/2.0.1/scrapy/utils/misc.py#L186-L187))
2. Run `python is_generator_bug.py`
3. Observe the kaboom
```python
import ast
import inspect
from textwrap import dedent
class Bob:
    def doit(self):
        """
this line is flush left
        """
        if True:
            yield 1234


if __name__ == '__main__':
    b = Bob()
    c = b.doit
    if inspect.isgeneratorfunction(c):
        tree = ast.parse(dedent(inspect.getsource(c)))
```
**Expected behavior:**
No Error
**Actual behavior:**
```console
$ python3.7 is_generator_bug.py
Traceback (most recent call last):
File "is_generator_bug.py", line 16, in <module>
tree = ast.parse(dedent(inspect.getsource(c)))
File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/ast.py", line 35, in parse
return compile(source, filename, mode, PyCF_ONLY_AST)
File "<unknown>", line 1
def doit(self):
^
IndentationError: unexpected indent
```
**Reproduces how often:**
100%
### Versions
```
Scrapy : 2.0.1
lxml : 4.5.0.0
libxml2 : 2.9.10
cssselect : 1.1.0
parsel : 1.5.2
w3lib : 1.21.0
Twisted : 20.3.0
Python : 3.7.7 (default, Mar 11 2020, 23:30:22) - [Clang 10.0.0 (clang-1000.11.45.5)]
pyOpenSSL : 19.1.0 (OpenSSL 1.1.1d 10 Sep 2019)
cryptography : 2.8
Platform : Darwin-17.7.0-x86_64-i386-64bit
```
### Additional context
| null | https://github.com/scrapy/scrapy/pull/4935 | null | {'base_commit': '0dad0fce72266aa7b38b536f87bab26e7f233c74', 'files': [{'path': 'scrapy/utils/misc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [12]}, "(None, 'is_generator_with_return_value', 217)": {'mod': [230]}, "(None, 'warn_on_generator_with_return_value', 240)": {'mod': [245, 247, 248, 249, 250, 251]}}}, {'path': 'tests/test_utils_misc/test_return_with_argument_inside_generator.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1], 'mod': [3]}, "('UtilsMiscPy3TestCase', None, 6)": {'mod': [8, 9]}, "('UtilsMiscPy3TestCase', 'test_generators_with_return_statements', 8)": {'mod': [13, 17, 21, 25, 28, 32, 40, 41, 43, 44, 49, 50, 51, 52, 53, 54, 55, 56]}, "('UtilsMiscPy3TestCase', 'g', 13)": {'mod': [15]}, "('UtilsMiscPy3TestCase', 'k', 28)": {'mod': [30]}, "('UtilsMiscPy3TestCase', 'n', 40)": {'mod': [46, 47]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/utils/misc.py"
],
"doc": [],
"test": [
"tests/test_utils_misc/test_return_with_argument_inside_generator.py"
],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | e217b68fd00bb7c54b81a492ee6f9db6498517fa | https://github.com/scikit-learn/scikit-learn/issues/18146 | Bug | Something goes wrong with KernelPCA with 32 bits input data | When given 32 bits input, KernelPCA succeed to transform the data into a 17-dimensional feature space while the original space was 3 features. I did not debug yet but this seems really unlikely.
```python
# %%
from sklearn.datasets import make_blobs
from sklearn.preprocessing import StandardScaler
X, y = make_blobs(
n_samples=30,
centers=[[0, 0, 0], [1, 1, 1]],
random_state=0,
cluster_std=0.1
)
X = StandardScaler().fit_transform(X)
X -= X.min()
# %%
import numpy as np
from sklearn.decomposition import KernelPCA
kpca = KernelPCA()
print(kpca.fit_transform(X).shape)
print(kpca.fit_transform(X.astype(np.float32)).shape)
``` | null | https://github.com/scikit-learn/scikit-learn/pull/18149 | null | {'base_commit': 'e217b68fd00bb7c54b81a492ee6f9db6498517fa', 'files': [{'path': 'doc/whats_new/v0.24.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [118], 'mod': [25, 26]}}}, {'path': 'sklearn/decomposition/tests/test_kernel_pca.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12]}, "(None, 'test_kernel_pca_inverse_transform', 290)": {'add': [297]}}}, {'path': 'sklearn/utils/validation.py', 'status': 'modified', 'Loc': {"(None, '_check_psd_eigenvalues', 1093)": {'mod': [1186]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/utils/validation.py"
],
"doc": [
"doc/whats_new/v0.24.rst"
],
"test": [
"sklearn/decomposition/tests/test_kernel_pca.py"
],
"config": [],
"asset": []
} | 1 |
localstack | localstack | 6c8f52d42563c1207a8cb3fbbfccb6d4af2a0670 | https://github.com/localstack/localstack/issues/544 | priority: high | S3 object metadata not saved when uploaded with presigned url | Use case:
I'm enabling users to upload directly to S3 using presigned URLs. S3 is configured to add an event to SQS on Put. A queue consumer reads the queue and makes HEAD requests with object keys to get the metadata and save the information to a database (generic image upload, so I know where to add the file).
Test script in Node.js (some ugly code here; to install deps run `npm install aws-sdk request`):
```js
const AWS = require("aws-sdk");
const request = require("request");
let s3 = new AWS.S3({
    endpoint: "http://localhost:4572",
    s3ForcePathStyle: true,
    accessKeyId: "",
    secretAccessKey: "",
    region: "us-west-1"
});
var bucket = "bucketest";
var key = "test.txt";
s3.createBucket({Bucket: bucket}, function (err, data) {
    if (err) {
        console.error(err.message);
        // ignore, probably there is bucket already
    }
    var params = {
        Bucket: bucket,
        Key: key,
        Metadata: {
            venue: "123"
        }
    };
    s3.getSignedUrl('putObject', params, function (err, url) {
        if (err) {
            console.error('Presigning post data encountered an error', err);
        } else {
            console.log('==== URL: ', url);
            var body = new Buffer('Test data.');
            request.put({ url, body, method: "PUT" }, function (err, resp, body) {
                if (err) {
                    console.log('======= error:', err);
                    return;
                }
                console.log(body);
                s3.headObject({Bucket: bucket, Key: key}, function (err, data) {
                    if (err) console.log("====== error1:", err, err.stack);
                    else console.log("==== HEAD RESPONSE", data);
                });
            })
        }
    });
});
```
Output:
```
==== URL: http://localhost:4572/heaps-test/test.txt?AWSAccessKeyId=somekey&Expires=1515503310&Signature=TgK3B33p2kwCWs5F5KtaZ3fxgXA%3D&x-amz-meta-venue=123
<PutObjectResponse xmlns="http://s3.amazonaws.com/doc/2006-03-01"><PutObjectResponse><ETag>"56dd8a439abf97fda051f88f09f00d65"</ETag><LastModified>2018-01-09T12:53:30.637Z</LastModified></PutObjectResponse></PutObjectResponse>
==== HEAD RESPONSE { LastModified: 2018-01-09T12:53:30.000Z,
ContentLength: 10,
ETag: '"56dd8a439abf97fda051f88f09f00d65"',
ContentType: 'text/html; charset=utf-8',
Metadata: {} }
```
Expected Output (tested with live AWS):
```
==== URL: https://heaps-test.s3.eu-west-1.amazonaws.com/test.txt?AWSAccessKeyId=somekey&Expires=1515503234&Signature=enc17C6glTsVtOiGobugz5NELIc%3D&x-amz-meta-venue=123
==== HEAD RESPONSE { AcceptRanges: 'bytes',
LastModified: 2018-01-09T12:52:15.000Z,
ContentLength: 10,
ETag: '"56dd8a439abf97fda051f88f09f00d65"',
ContentType: 'binary/octet-stream',
Metadata: { venue: '123' } }
```
As you can see Metadata is empty when using localstack
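Note that the presigned URL itself carries the metadata as `x-amz-meta-*` query parameters (visible in the URL printed above), so the server side has everything it needs. The sketch below is a hypothetical helper, not localstack's actual code, showing what the listener would have to do: copy those query parameters into the stored object's metadata.

```python
from urllib.parse import urlparse, parse_qsl

def metadata_from_presigned_url(url):
    # hypothetical helper (not localstack's actual code): presigned PUT URLs
    # carry object metadata as 'x-amz-meta-*' query parameters, which the
    # server must copy into the stored object's metadata
    query = parse_qsl(urlparse(url).query)
    return {k[len('x-amz-meta-'):]: v
            for k, v in query
            if k.lower().startswith('x-amz-meta-')}
```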
| null | https://github.com/localstack/localstack/pull/1745 | null | {'base_commit': '6c8f52d42563c1207a8cb3fbbfccb6d4af2a0670', 'files': [{'path': 'localstack/services/s3/s3_listener.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [44, 308]}, "('ProxyListenerS3', 'forward_request', 514)": {'add': [563]}, "('ProxyListenerS3', 'return_response', 595)": {'mod': [665]}}}, {'path': 'tests/integration/test_s3.py', 'status': 'modified', 'Loc': {"('S3ListenerTest', None, 30)": {'add': [187]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"localstack/services/s3/s3_listener.py"
],
"doc": [],
"test": [
"tests/integration/test_s3.py"
],
"config": [],
"asset": []
} | 1 |
Significant-Gravitas | AutoGPT | a2723f16f2d5c748c382359c6ce5fdd1e53728d3 | https://github.com/Significant-Gravitas/AutoGPT/issues/1639 | function: process text | This model's maximum context length is 8191 tokens, however you requested 89686 tokens (89686 in your prompt) | ### Duplicates
- [X] I have searched the existing issues
### Steps to reproduce 🕹
The program is trying to process an absurd amount of information at once. It happens over and over again.
Adding chunk 17 / 20 to memory
SYSTEM: Command browse_website returned: Error: This model's maximum context length is 8191 tokens, however you requested 89686 tokens (89686 in your prompt;
0 for the completion). Please reduce your prompt; or completion length.
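The error suggests the website text was sent in chunks that were far larger than the model's context window. A rough sketch of budget-aware chunking is shown below; it is illustrative, not Auto-GPT's actual implementation, and it approximates token counts by whitespace-separated words where a real implementation would use the model's tokenizer.

```python
def split_text(text, max_tokens=2048):
    # rough sketch (illustrative, not Auto-GPT's actual code): pack
    # paragraphs greedily so no single request exceeds the model's context
    # budget; token count approximated by whitespace-separated words
    chunks, current, length = [], [], 0
    for paragraph in text.split("\n"):
        words = len(paragraph.split())
        if current and length + words > max_tokens:
            chunks.append("\n".join(current))
            current, length = [], 0
        current.append(paragraph)
        length += words
    if current:
        chunks.append("\n".join(current))
    return chunks
```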
### Current behavior 😯
_No response_
### Expected behavior 🤔
_No response_
### Your prompt 📝
```yaml
# Paste your prompt here
```
| null | https://github.com/Significant-Gravitas/AutoGPT/pull/2542 | null | {'base_commit': 'a2723f16f2d5c748c382359c6ce5fdd1e53728d3', 'files': [{'path': '.env.template', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [154], 'mod': [10, 11]}}}, {'path': 'autogpt/config/config.py', 'status': 'modified', 'Loc': {"('Config', '__init__', 19)": {'mod': [34]}}}, {'path': 'autogpt/processing/text.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 5]}, "(None, 'summarize_text', 44)": {'add': [60, 78], 'mod': [65, 77, 81, 85, 97]}, "(None, 'split_text', 14)": {'mod': [14, 27, 28, 31, 32, 33, 34, 36, 37, 38, 41]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"autogpt/processing/text.py",
"autogpt/config/config.py"
],
"doc": [],
"test": [],
"config": [
"requirements.txt",
".env.template"
],
"asset": []
} | 1 |
ansible | ansible | 141d638e590897d4ec5371c4868f027dad95a38e | https://github.com/ansible/ansible/issues/36691 | module
affects_2.4
support:core
docs | stat documentation: mime_type vs mimetype, mime output vs descriptive output | <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and devel branch are affected too.
-->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
stat
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes below -->
```
ansible 2.4.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/oliver/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
##### CONFIGURATION
ansible.cfg:
```
[defaults]
inventory = ./hosts
roles_path = ./roles
remote_user = oliver
nocows = 1
vault_password_file = ./get_vault_password_from_keyring.py
gathering = smart
fact_caching = jsonfile
fact_caching_timeout = 21600
fact_caching_connection = ./cache
[privilege_escalation]
become = True
become_method = sudo
[ssh_connection]
pipelining = True
```
##### OS / ENVIRONMENT
Running Ansible on Ubuntu 16.04.4 x86_64; target is the same machine.
Tried with and without python-magic module installed, but didn't observe any different behaviour.
##### SUMMARY
<!--- Explain the problem briefly -->
Documentation for stat module (http://docs.ansible.com/ansible/latest/stat_module.html) mentions that a "mime_type" entry will be set if get_mime is set to true, with example content being "PDF document, version 1.2". I couldn't get this result; rather:
- a "mimetype" entry is set (ie. no underscore)
- the mimetype entry contains the actual mime type (eg. "application/pdf") rather than a description (eg. "PDF document, version 1.2")
##### STEPS TO REPRODUCE
```yaml
- name: Get mime type of test file
  stat: path="/home/oliver/mozilla.pdf"
  register: my_stat_check

- debug:
    msg: "{{ my_stat_check.stat.mimetype }}"

- debug:
    msg: "{{ my_stat_check.stat.mime_type }}"
```
##### EXPECTED RESULTS
I expected the first debug message to fail, and expected the second one to succeed and print "PDF document, version 1.2".
Alternatively, the documentation should state that "mimetype" will be set, and will contain the technical mime type rather than a description.
Though admittedly I'd prefer to also get the descriptive type output, since e.g. for swap files the mime type is always "application/octet-stream" (so a swap file is indistinguishable from any other binary file), while the descriptive type is something like "Linux/i386 swap file (new style)", which is more useful.
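For reference, the technical type the module actually returns under `mimetype` is the same kind of value that Python's standard `mimetypes` module guesses from a file extension, while the descriptive string ("PDF document, version 1.2") comes from `file(1)`'s default mode and is not returned at all:

```python
import mimetypes

# the 'mimetype' key holds the technical type; the human-readable
# description ('PDF document, version 1.2') is a separate file(1) output
# that the stat module does not expose
print(mimetypes.guess_type('/home/oliver/mozilla.pdf')[0])  # application/pdf
```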
##### ACTUAL RESULTS
At the moment, the first debug message will work and will print "application/pdf". The second debug message will fail with "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mime_type'".
```
TASK [tools : Get mime type of test file] *****************************************************************************************************************************************************************************************************
ok: [myhost] => {"changed": false, "stat": {"atime": 1519571973.3959274, "attr_flags": "e", "attributes": ["extents"], "block_size": 4096, "blocks": 352, "charset": "binary", "checksum": "2d9eb9f17601726c56bd0c4fbc770430d0ac2277", "ctime": 1519484291.234419, "dev": 2098, "device_type": 0, "executable": false, "exists": true, "gid": 1000, "gr_name": "oliver", "inode": 12592144, "isblk": false, "ischr": false, "isdir": false, "isfifo": false, "isgid": false, "islnk": false, "isreg": true, "issock": false, "isuid": false, "md5": "760bf09b20afc699ad5cb4cabf3a151a", "mimetype": "application/pdf", "mode": "0664", "mtime": 1519484291.234419, "nlink": 1, "path": "/home/oliver/mozilla.pdf", "pw_name": "oliver", "readable": true, "rgrp": true, "roth": true, "rusr": true, "size": 180168, "uid": 1000, "version": "18446744071694658390", "wgrp": true, "woth": false, "writeable": true, "wusr": true, "xgrp": false, "xoth": false, "xusr": false}}
TASK [tools : debug] **************************************************************************************************************************************************************************************************************************
ok: [myhost] => {
"msg": "application/pdf"
}
TASK [tools : debug] **************************************************************************************************************************************************************************************************************************
fatal: [myhost]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mime_type'\n\nThe error appears to have been in '/home/oliver/devel/myhost/ansible/roles/tools/tasks/main.yml': line 40, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- debug:\n ^ here\n\nexception type: <class 'ansible.errors.AnsibleUndefinedVariable'>\nexception: 'dict object' has no attribute 'mime_type'"}
to retry, use: --limit @/home/oliver/devel/myhost/ansible/playbooks/desktop.retry
```
| null | https://github.com/ansible/ansible/pull/36693 | null | {'base_commit': '141d638e590897d4ec5371c4868f027dad95a38e', 'files': [{'path': 'lib/ansible/modules/files/stat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [318, 324]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/files/stat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | ba1b3db70907b975b5ca52b9957c5ed7a186a0fa | https://github.com/huggingface/transformers/issues/12990 | kindly adding some documentations on t5-v1_1-base"" | ## Environment info
- `transformers` version:
- Platform:
- Python version:
- PyTorch version (GPU?):
- Tensorflow version (GPU?):
- Using GPU in script?:
- Using distributed or parallel set-up in script?:
### Who can help
Documentation: @sgugger
Hi
Could you kindly add some documentation on "t5-v1_1-base"? I tested the same code with t5-base and the t5-v1_1 version; for t5-v1_1 I ran into a memory issue, which suggests the model size is different and larger. Also, the fast tokenizer for this model does not work. Could you kindly add documentation on these differences?
thanks a lot.
| null | https://github.com/huggingface/transformers/pull/13240 | null | {'base_commit': 'ba1b3db70907b975b5ca52b9957c5ed7a186a0fa', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [274]}}}, {'path': 'docs/source/index.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [610], 'mod': [285, 288, 291, 295, 298, 301, 303, 306, 310, 313]}}}, {'path': 'docs/source/model_doc/byt5.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [41], 'mod': [43]}}}, {'path': 'docs/source/model_doc/mt5.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30]}}}, {'path': 'docs/source/model_doc/t5.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [53, 102], 'mod': [16, 17, 45, 46, 47, 48, 49, 58, 59, 60, 61, 62, 66, 75, 77, 78, 79, 81, 82, 83, 84, 88, 89, 90, 94, 95, 96, 98, 99, 100, 101]}}}, {'path': 'src/transformers/models/t5/modeling_flax_t5.py', 'status': 'modified', 'Loc': {"('FlaxT5PreTrainedModel', 'encode', 1044)": {'add': [1063], 'mod': [1062, 1066]}, "('FlaxT5PreTrainedModel', 'decode', 1101)": {'add': [1120, 1123], 'mod': [1122, 1126, 1133]}, '(None, None, None)': {'add': [1333, 1621], 'mod': [1332, 1620, 1624, 1628]}, "('FlaxT5ForConditionalGeneration', 'decode', 1452)": {'add': [1471, 1474], 'mod': [1473, 1477, 1484]}}}, {'path': 'src/transformers/models/t5/modeling_t5.py', 'status': 'modified', 'Loc': {"('T5Model', 'forward', 1317)": {'add': [1348], 'mod': [1347]}, "('T5ForConditionalGeneration', 'forward', 1506)": {'add': [1539, 1547], 'mod': [1541, 1546]}, '(None, None, None)': {'mod': [1237]}}}, {'path': 'src/transformers/models/t5/modeling_tf_t5.py', 'status': 'modified', 'Loc': {"('TFT5Model', 'call', 1105)": {'add': [1137], 'mod': [1136]}, "('TFT5ForConditionalGeneration', 'call', 1290)": {'add': [1323], 'mod': [1325, 1330, 1332]}, "('TFT5EncoderModel', 'call', 1557)": {'mod': [1574]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/models/t5/modeling_flax_t5.py",
"src/transformers/models/t5/modeling_t5.py",
"src/transformers/models/t5/modeling_tf_t5.py"
],
"doc": [
"docs/source/model_doc/t5.rst",
"docs/source/index.rst",
"docs/source/model_doc/mt5.rst",
"README.md",
"docs/source/model_doc/byt5.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
nvbn | thefuck | 8fa10b1049ddf21f188b9605bcd5afbe33bf33db | https://github.com/nvbn/thefuck/issues/975 | enhancement
hacktoberfest | Correcting `app-install` to `apt-get install` rather than `install` | <!-- If you have any issue with The Fuck, sorry about that, but we will do what we
can to fix that. Actually, maybe we already have, so first thing to do is to
update The Fuck and see if the bug is still there. -->
<!-- If it is (sorry again), check if the problem has not already been reported and
if not, just open an issue on [GitHub](https://github.com/nvbn/thefuck) with
the following basic information: -->
The output of `thefuck --version` (something like `The Fuck 3.1 using Python
3.5.0 and Bash 4.4.12(1)-release`):
The Fuck 3.29 using Python 3.6.8 and Bash 4.4.20(1)-release
Your system (Debian 7, ArchLinux, Windows, etc.):
Ubuntu 18.04.3 LTS
How to reproduce the bug:
~$ sudo apt-install python
fuck
The output of The Fuck with `THEFUCK_DEBUG=true` exported (typically execute `export THEFUCK_DEBUG=true` in your shell before The Fuck):
```
DEBUG: Run with settings: {'alter_history': True,
'debug': True,
'env': {'GIT_TRACE': '1', 'LANG': 'C', 'LC_ALL': 'C'},
'exclude_rules': [],
'history_limit': None,
'instant_mode': False,
'no_colors': False,
'num_close_matches': 3,
'priority': {},
'repeat': False,
'require_confirmation': True,
'rules': [<const: All rules enabled>],
'slow_commands': ['lein', 'react-native', 'gradle', './gradlew', 'vagrant'],
'user_dir': PosixPath('/home/user/.config/thefuck'),
'wait_command': 3,
'wait_slow_command': 15}
DEBUG: Received output: sudo: apt-install: command not found
DEBUG: Call: sudo apt-install python; with env: {'CLUTTER_IM_MODULE': 'xim', 'LS_COLORS': 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:', 'LESSCLOSE': '/usr/bin/lesspipe %s %s', 'XDG_MENU_PREFIX': 'gnome-', 'LANG': 'C', 'DISPLAY': ':0', 'GNOME_SHELL_SESSION_MODE': 'ubuntu', 'COLORTERM': 'truecolor', 'TF_SHELL_ALIASES': 'alias alert=\'notify-send --urgency=low -i "$([ $? 
= 0 ] && echo terminal || echo error)" "$(history|tail -n1|sed -e \'\\\'\'s/^\\s*[0-9]\\+\\s*//;s/[;&|]\\s*alert$//\'\\\'\')"\'\nalias egrep=\'egrep --color=auto\'\nalias fgrep=\'fgrep --color=auto\'\nalias grep=\'grep --color=auto\'\nalias l=\'ls -CF\'\nalias la=\'ls -A\'\nalias ll=\'ls -alF\'\nalias ls=\'ls --color=auto\'', 'DESKTOP_AUTOSTART_ID': '10e5aedf3552f69d7a157076635571365600000035090007', 'USERNAME': 'user', 'XDG_VTNR': '2', 'PYTHONIOENCODING': 'utf-8', 'SSH_AUTH_SOCK': '/run/user/1000/keyring/ssh', 'MANDATORY_PATH': '/usr/share/gconf/ubuntu.mandatory.path', 'XDG_SESSION_ID': '2', 'USER': 'user', 'DESKTOP_SESSION': 'ubuntu', 'QT4_IM_MODULE': 'xim', 'TEXTDOMAINDIR': '/usr/share/locale/', 'GNOME_TERMINAL_SCREEN': '/org/gnome/Terminal/screen/988562f2_716d_4bc1_9825_43d1608e1ccb', 'TF_SHELL': 'bash', 'DEFAULTS_PATH': '/usr/share/gconf/ubuntu.default.path', 'PWD': '/home/user', 'HOME': '/home/user', 'TEXTDOMAIN': 'im-config', 'SSH_AGENT_PID': '3588', 'QT_ACCESSIBILITY': '1', 'XDG_SESSION_TYPE': 'x11', 'XDG_DATA_DIRS': '/usr/share/ubuntu:/usr/local/share:/usr/share:/var/lib/snapd/desktop', 'XDG_SESSION_DESKTOP': 'ubuntu', 'GTK_MODULES': 'gail:atk-bridge', 'WINDOWPATH': '2', 'TERM': 'xterm-256color', 'SHELL': '/bin/bash', 'VTE_VERSION': '5202', 'QT_IM_MODULE': 'ibus', 'XMODIFIERS': '@im=ibus', 'IM_CONFIG_PHASE': '2', 'XDG_CURRENT_DESKTOP': 'ubuntu:GNOME', 'GPG_AGENT_INFO': '/run/user/1000/gnupg/S.gpg-agent:0:1', 'TF_ALIAS': 'fuck', 'GNOME_TERMINAL_SERVICE': ':1.82', 'XDG_SEAT': 'seat0', 'SHLVL': '1', 'LANGUAGE': 'en_IL:en', 'GDMSESSION': 'ubuntu', 'GNOME_DESKTOP_SESSION_ID': 'this-is-deprecated', 'LOGNAME': 'user', 'DBUS_SESSION_BUS_ADDRESS': 'unix:path=/run/user/1000/bus', 'XDG_RUNTIME_DIR': '/run/user/1000', 'XAUTHORITY': '/run/user/1000/gdm/Xauthority', 'TF_HISTORY': '\t apt-install brew\n\t apt-get install brew\n\t fuck\n\t sudo apt-install python\n\t sudo install python\n\t thefuck --version\n\t adb_release -a\n\t lsb_release -a\n\t export 
THEFUCK_DEBUG=true\n\t sudo apt-install python', 'XDG_CONFIG_DIRS': '/etc/xdg/xdg-ubuntu:/etc/xdg', 'PATH': '/home/user/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'THEFUCK_DEBUG': 'true', 'SESSION_MANAGER': 'local/virt-lnx:@/tmp/.ICE-unix/3509,unix/virt-lnx:/tmp/.ICE-unix/3509', 'LESSOPEN': '| /usr/bin/lesspipe %s', 'GTK_IM_MODULE': 'ibus', '_': '/usr/local/bin/thefuck', 'LC_ALL': 'C', 'GIT_TRACE': '1'}; is slow: took: 0:00:00.008240
DEBUG: Importing rule: adb_unknown_command; took: 0:00:00.000389
DEBUG: Importing rule: ag_literal; took: 0:00:00.000628
DEBUG: Importing rule: apt_get; took: 0:00:00.014902
DEBUG: Importing rule: apt_get_search; took: 0:00:00.000415
DEBUG: Importing rule: apt_invalid_operation; took: 0:00:00.000915
DEBUG: Importing rule: apt_list_upgradable; took: 0:00:00.000459
DEBUG: Importing rule: apt_upgrade; took: 0:00:00.000436
DEBUG: Importing rule: aws_cli; took: 0:00:00.000384
DEBUG: Importing rule: az_cli; took: 0:00:00.000309
DEBUG: Importing rule: brew_cask_dependency; took: 0:00:00.000625
DEBUG: Importing rule: brew_install; took: 0:00:00.000120
DEBUG: Importing rule: brew_link; took: 0:00:00.000283
DEBUG: Importing rule: brew_reinstall; took: 0:00:00.000605
DEBUG: Importing rule: brew_uninstall; took: 0:00:00.000291
DEBUG: Importing rule: brew_unknown_command; took: 0:00:00.000142
DEBUG: Importing rule: brew_update_formula; took: 0:00:00.000292
DEBUG: Importing rule: brew_upgrade; took: 0:00:00.000112
DEBUG: Importing rule: cargo; took: 0:00:00.000098
DEBUG: Importing rule: cargo_no_command; took: 0:00:00.000292
DEBUG: Importing rule: cat_dir; took: 0:00:00.000322
DEBUG: Importing rule: cd_correction; took: 0:00:00.001288
DEBUG: Importing rule: cd_mkdir; took: 0:00:00.000479
DEBUG: Importing rule: cd_parent; took: 0:00:00.000114
DEBUG: Importing rule: chmod_x; took: 0:00:00.000108
DEBUG: Importing rule: composer_not_command; took: 0:00:00.000309
DEBUG: Importing rule: cp_omitting_directory; took: 0:00:00.000570
DEBUG: Importing rule: cpp11; took: 0:00:00.000311
DEBUG: Importing rule: dirty_untar; took: 0:00:00.001544
DEBUG: Importing rule: dirty_unzip; took: 0:00:00.001127
DEBUG: Importing rule: django_south_ghost; took: 0:00:00.000117
DEBUG: Importing rule: django_south_merge; took: 0:00:00.000106
DEBUG: Importing rule: dnf_no_such_command; took: 0:00:00.000930
DEBUG: Importing rule: docker_login; took: 0:00:00.000350
DEBUG: Importing rule: docker_not_command; took: 0:00:00.000597
DEBUG: Importing rule: dry; took: 0:00:00.000107
DEBUG: Importing rule: fab_command_not_found; took: 0:00:00.000416
DEBUG: Importing rule: fix_alt_space; took: 0:00:00.000284
DEBUG: Importing rule: fix_file; took: 0:00:00.003212
DEBUG: Importing rule: gem_unknown_command; took: 0:00:00.000493
DEBUG: Importing rule: git_add; took: 0:00:00.000547
DEBUG: Importing rule: git_add_force; took: 0:00:00.000371
DEBUG: Importing rule: git_bisect_usage; took: 0:00:00.000382
DEBUG: Importing rule: git_branch_delete; took: 0:00:00.000293
DEBUG: Importing rule: git_branch_exists; took: 0:00:00.000364
DEBUG: Importing rule: git_branch_list; took: 0:00:00.000304
DEBUG: Importing rule: git_checkout; took: 0:00:00.000835
DEBUG: Importing rule: git_commit_amend; took: 0:00:00.000306
DEBUG: Importing rule: git_commit_reset; took: 0:00:00.000281
DEBUG: Importing rule: git_diff_no_index; took: 0:00:00.000288
DEBUG: Importing rule: git_diff_staged; took: 0:00:00.000275
DEBUG: Importing rule: git_fix_stash; took: 0:00:00.000285
DEBUG: Importing rule: git_flag_after_filename; took: 0:00:00.000278
DEBUG: Importing rule: git_help_aliased; took: 0:00:00.000278
DEBUG: Importing rule: git_merge; took: 0:00:00.000272
DEBUG: Importing rule: git_merge_unrelated; took: 0:00:00.000264
DEBUG: Importing rule: git_not_command; took: 0:00:00.000326
DEBUG: Importing rule: git_pull; took: 0:00:00.000277
DEBUG: Importing rule: git_pull_clone; took: 0:00:00.000272
DEBUG: Importing rule: git_pull_uncommitted_changes; took: 0:00:00.000270
DEBUG: Importing rule: git_push; took: 0:00:00.000273
DEBUG: Importing rule: git_push_different_branch_names; took: 0:00:00.000268
DEBUG: Importing rule: git_push_force; took: 0:00:00.000270
DEBUG: Importing rule: git_push_pull; took: 0:00:00.000273
DEBUG: Importing rule: git_push_without_commits; took: 0:00:00.000352
DEBUG: Importing rule: git_rebase_merge_dir; took: 0:00:00.000279
DEBUG: Importing rule: git_rebase_no_changes; took: 0:00:00.000187
DEBUG: Importing rule: git_remote_delete; took: 0:00:00.000264
DEBUG: Importing rule: git_remote_seturl_add; took: 0:00:00.000192
DEBUG: Importing rule: git_rm_local_modifications; took: 0:00:00.000297
DEBUG: Importing rule: git_rm_recursive; took: 0:00:00.000324
DEBUG: Importing rule: git_rm_staged; took: 0:00:00.000276
DEBUG: Importing rule: git_stash; took: 0:00:00.000271
DEBUG: Importing rule: git_stash_pop; took: 0:00:00.000273
DEBUG: Importing rule: git_tag_force; took: 0:00:00.000265
DEBUG: Importing rule: git_two_dashes; took: 0:00:00.000273
DEBUG: Importing rule: go_run; took: 0:00:00.000287
DEBUG: Importing rule: gradle_no_task; took: 0:00:00.000598
DEBUG: Importing rule: gradle_wrapper; took: 0:00:00.000341
DEBUG: Importing rule: grep_arguments_order; took: 0:00:00.000293
DEBUG: Importing rule: grep_recursive; took: 0:00:00.000296
DEBUG: Importing rule: grunt_task_not_found; took: 0:00:00.000514
DEBUG: Importing rule: gulp_not_task; took: 0:00:00.000295
DEBUG: Importing rule: has_exists_script; took: 0:00:00.000275
DEBUG: Importing rule: heroku_multiple_apps; took: 0:00:00.000287
DEBUG: Importing rule: heroku_not_command; took: 0:00:00.000285
DEBUG: Importing rule: history; took: 0:00:00.000116
DEBUG: Importing rule: hostscli; took: 0:00:00.000655
DEBUG: Importing rule: ifconfig_device_not_found; took: 0:00:00.000415
DEBUG: Importing rule: java; took: 0:00:00.000295
DEBUG: Importing rule: javac; took: 0:00:00.000282
DEBUG: Importing rule: lein_not_task; took: 0:00:00.000615
DEBUG: Importing rule: ln_no_hard_link; took: 0:00:00.000452
DEBUG: Importing rule: ln_s_order; took: 0:00:00.000435
DEBUG: Importing rule: long_form_help; took: 0:00:00.000128
DEBUG: Importing rule: ls_all; took: 0:00:00.000304
DEBUG: Importing rule: ls_lah; took: 0:00:00.000298
DEBUG: Importing rule: man; took: 0:00:00.000308
DEBUG: Importing rule: man_no_space; took: 0:00:00.000105
DEBUG: Importing rule: mercurial; took: 0:00:00.000275
DEBUG: Importing rule: missing_space_before_subcommand; took: 0:00:00.000117
DEBUG: Importing rule: mkdir_p; took: 0:00:00.000281
DEBUG: Importing rule: mvn_no_command; took: 0:00:00.000317
DEBUG: Importing rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000287
DEBUG: Importing rule: no_command; took: 0:00:00.000282
DEBUG: Importing rule: no_such_file; took: 0:00:00.000114
DEBUG: Importing rule: npm_missing_script; took: 0:00:00.000624
DEBUG: Importing rule: npm_run_script; took: 0:00:00.000367
DEBUG: Importing rule: npm_wrong_command; took: 0:00:00.000446
DEBUG: Importing rule: open; took: 0:00:00.000362
DEBUG: Importing rule: pacman; took: 0:00:00.000502
DEBUG: Importing rule: pacman_not_found; took: 0:00:00.000116
DEBUG: Importing rule: path_from_history; took: 0:00:00.000126
DEBUG: Importing rule: php_s; took: 0:00:00.000314
DEBUG: Importing rule: pip_install; took: 0:00:00.000375
DEBUG: Importing rule: pip_unknown_command; took: 0:00:00.000362
DEBUG: Importing rule: port_already_in_use; took: 0:00:00.000196
DEBUG: Importing rule: prove_recursively; took: 0:00:00.000283
DEBUG: Importing rule: pyenv_no_such_command; took: 0:00:00.000610
DEBUG: Importing rule: python_command; took: 0:00:00.000301
DEBUG: Importing rule: python_execute; took: 0:00:00.000275
DEBUG: Importing rule: quotation_marks; took: 0:00:00.000099
DEBUG: Importing rule: react_native_command_unrecognized; took: 0:00:00.000368
DEBUG: Importing rule: remove_trailing_cedilla; took: 0:00:00.000103
DEBUG: Importing rule: rm_dir; took: 0:00:00.000283
DEBUG: Importing rule: rm_root; took: 0:00:00.000373
DEBUG: Importing rule: scm_correction; took: 0:00:00.000289
DEBUG: Importing rule: sed_unterminated_s; took: 0:00:00.000282
DEBUG: Importing rule: sl_ls; took: 0:00:00.000098
DEBUG: Importing rule: ssh_known_hosts; took: 0:00:00.000281
DEBUG: Importing rule: sudo; took: 0:00:00.000105
DEBUG: Importing rule: sudo_command_from_user_path; took: 0:00:00.000280
DEBUG: Importing rule: switch_lang; took: 0:00:00.000145
DEBUG: Importing rule: systemctl; took: 0:00:00.000448
DEBUG: Importing rule: test.py; took: 0:00:00.000098
DEBUG: Importing rule: tmux; took: 0:00:00.000332
DEBUG: Importing rule: touch; took: 0:00:00.000405
DEBUG: Importing rule: tsuru_login; took: 0:00:00.000333
DEBUG: Importing rule: tsuru_not_command; took: 0:00:00.000317
DEBUG: Importing rule: unknown_command; took: 0:00:00.000107
DEBUG: Importing rule: unsudo; took: 0:00:00.000098
DEBUG: Importing rule: vagrant_up; took: 0:00:00.000282
DEBUG: Importing rule: whois; took: 0:00:00.000441
DEBUG: Importing rule: workon_doesnt_exists; took: 0:00:00.000371
DEBUG: Importing rule: yarn_alias; took: 0:00:00.000272
DEBUG: Importing rule: yarn_command_not_found; took: 0:00:00.000719
DEBUG: Importing rule: yarn_command_replaced; took: 0:00:00.000386
DEBUG: Importing rule: yarn_help; took: 0:00:00.000285
DEBUG: Trying rule: path_from_history; took: 0:00:00.000518
DEBUG: Trying rule: dry; took: 0:00:00.000080
DEBUG: Trying rule: git_stash_pop; took: 0:00:00.000024
DEBUG: Trying rule: test.py; took: 0:00:00.000002
DEBUG: Trying rule: adb_unknown_command; took: 0:00:00.000015
DEBUG: Trying rule: ag_literal; took: 0:00:00.000015
DEBUG: Trying rule: apt_get; took: 0:00:00.000358
DEBUG: Trying rule: apt_get_search; took: 0:00:00.000020
DEBUG: Trying rule: apt_invalid_operation; took: 0:00:00.000068
DEBUG: Trying rule: apt_list_upgradable; took: 0:00:00.000059
DEBUG: Trying rule: apt_upgrade; took: 0:00:00.000019
DEBUG: Trying rule: aws_cli; took: 0:00:00.000015
DEBUG: Trying rule: az_cli; took: 0:00:00.000014
DEBUG: Trying rule: brew_link; took: 0:00:00.000016
DEBUG: Trying rule: brew_reinstall; took: 0:00:00.000013
DEBUG: Trying rule: brew_uninstall; took: 0:00:00.000012
DEBUG: Trying rule: brew_update_formula; took: 0:00:00.000013
DEBUG: Trying rule: cargo; took: 0:00:00.000002
DEBUG: Trying rule: cargo_no_command; took: 0:00:00.000015
DEBUG: Trying rule: cat_dir; took: 0:00:00.000015
DEBUG: Trying rule: cd_correction; took: 0:00:00.000055
DEBUG: Trying rule: cd_mkdir; took: 0:00:00.000017
DEBUG: Trying rule: cd_parent; took: 0:00:00.000002
DEBUG: Trying rule: chmod_x; took: 0:00:00.000003
DEBUG: Trying rule: composer_not_command; took: 0:00:00.000014
DEBUG: Trying rule: cp_omitting_directory; took: 0:00:00.000053
DEBUG: Trying rule: cpp11; took: 0:00:00.000015
DEBUG: Trying rule: dirty_untar; took: 0:00:00.000014
DEBUG: Trying rule: dirty_unzip; took: 0:00:00.000013
DEBUG: Trying rule: django_south_ghost; took: 0:00:00.000003
DEBUG: Trying rule: django_south_merge; took: 0:00:00.000002
DEBUG: Trying rule: docker_login; took: 0:00:00.000014
DEBUG: Trying rule: docker_not_command; took: 0:00:00.000053
DEBUG: Trying rule: fab_command_not_found; took: 0:00:00.000014
DEBUG: Trying rule: fix_alt_space; took: 0:00:00.000008
DEBUG: Trying rule: fix_file; took: 0:00:00.000009
DEBUG: Trying rule: gem_unknown_command; took: 0:00:00.000018
DEBUG: Trying rule: git_add; took: 0:00:00.000013
DEBUG: Trying rule: git_add_force; took: 0:00:00.000011
DEBUG: Trying rule: git_bisect_usage; took: 0:00:00.000012
DEBUG: Trying rule: git_branch_delete; took: 0:00:00.000011
DEBUG: Trying rule: git_branch_exists; took: 0:00:00.000011
DEBUG: Trying rule: git_branch_list; took: 0:00:00.000011
DEBUG: Trying rule: git_checkout; took: 0:00:00.000011
DEBUG: Trying rule: git_commit_amend; took: 0:00:00.000010
DEBUG: Trying rule: git_commit_reset; took: 0:00:00.000014
DEBUG: Trying rule: git_diff_no_index; took: 0:00:00.000011
DEBUG: Trying rule: git_diff_staged; took: 0:00:00.000011
DEBUG: Trying rule: git_fix_stash; took: 0:00:00.000011
DEBUG: Trying rule: git_flag_after_filename; took: 0:00:00.000011
DEBUG: Trying rule: git_help_aliased; took: 0:00:00.000011
DEBUG: Trying rule: git_merge; took: 0:00:00.000011
DEBUG: Trying rule: git_merge_unrelated; took: 0:00:00.000011
DEBUG: Trying rule: git_not_command; took: 0:00:00.000011
DEBUG: Trying rule: git_pull; took: 0:00:00.000014
DEBUG: Trying rule: git_pull_clone; took: 0:00:00.000011
DEBUG: Trying rule: git_pull_uncommitted_changes; took: 0:00:00.000011
DEBUG: Trying rule: git_push; took: 0:00:00.000011
DEBUG: Trying rule: git_push_different_branch_names; took: 0:00:00.000011
DEBUG: Trying rule: git_push_pull; took: 0:00:00.000011
DEBUG: Trying rule: git_push_without_commits; took: 0:00:00.000011
DEBUG: Trying rule: git_rebase_merge_dir; took: 0:00:00.000011
DEBUG: Trying rule: git_rebase_no_changes; took: 0:00:00.000011
DEBUG: Trying rule: git_remote_delete; took: 0:00:00.000014
DEBUG: Trying rule: git_remote_seturl_add; took: 0:00:00.000011
DEBUG: Trying rule: git_rm_local_modifications; took: 0:00:00.000011
DEBUG: Trying rule: git_rm_recursive; took: 0:00:00.000011
DEBUG: Trying rule: git_rm_staged; took: 0:00:00.000011
DEBUG: Trying rule: git_stash; took: 0:00:00.000011
DEBUG: Trying rule: git_tag_force; took: 0:00:00.000012
DEBUG: Trying rule: git_two_dashes; took: 0:00:00.000011
DEBUG: Trying rule: go_run; took: 0:00:00.000015
DEBUG: Trying rule: gradle_no_task; took: 0:00:00.000018
DEBUG: Trying rule: gradle_wrapper; took: 0:00:00.000014
DEBUG: Trying rule: grep_arguments_order; took: 0:00:00.000014
DEBUG: Trying rule: grep_recursive; took: 0:00:00.000013
DEBUG: Trying rule: grunt_task_not_found; took: 0:00:00.000014
DEBUG: Trying rule: gulp_not_task; took: 0:00:00.000013
DEBUG: Trying rule: has_exists_script; took: 0:00:00.000053
DEBUG: Trying rule: heroku_multiple_apps; took: 0:00:00.000015
DEBUG: Trying rule: heroku_not_command; took: 0:00:00.000015
DEBUG: Trying rule: hostscli; took: 0:00:00.000053
DEBUG: Trying rule: ifconfig_device_not_found; took: 0:00:00.000014
DEBUG: Trying rule: java; took: 0:00:00.000013
DEBUG: Trying rule: javac; took: 0:00:00.000014
DEBUG: Trying rule: lein_not_task; took: 0:00:00.000052
DEBUG: Trying rule: ln_no_hard_link; took: 0:00:00.000007
DEBUG: Trying rule: ln_s_order; took: 0:00:00.000044
DEBUG: Trying rule: ls_all; took: 0:00:00.000015
DEBUG: Trying rule: ls_lah; took: 0:00:00.000012
DEBUG: Trying rule: man; took: 0:00:00.000014
DEBUG: Trying rule: mercurial; took: 0:00:00.000017
DEBUG: Trying rule: mkdir_p; took: 0:00:00.000007
DEBUG: Trying rule: mvn_no_command; took: 0:00:00.000020
DEBUG: Trying rule: mvn_unknown_lifecycle_phase; took: 0:00:00.000012
DEBUG: Trying rule: no_such_file; took: 0:00:00.000608
DEBUG: Trying rule: npm_missing_script; took: 0:00:00.000017
DEBUG: Trying rule: npm_run_script; took: 0:00:00.000012
DEBUG: Trying rule: npm_wrong_command; took: 0:00:00.000059
DEBUG: Trying rule: open; took: 0:00:00.000019
DEBUG: Trying rule: php_s; took: 0:00:00.000019
DEBUG: Trying rule: pip_install; took: 0:00:00.000058
DEBUG: Trying rule: pip_unknown_command; took: 0:00:00.000055
DEBUG: Trying rule: port_already_in_use; took: 0:00:00.000489
DEBUG: Trying rule: prove_recursively; took: 0:00:00.000022
DEBUG: Trying rule: pyenv_no_such_command; took: 0:00:00.000015
DEBUG: Trying rule: python_command; took: 0:00:00.000047
DEBUG: Trying rule: python_execute; took: 0:00:00.000015
DEBUG: Trying rule: quotation_marks; took: 0:00:00.000003
DEBUG: Trying rule: react_native_command_unrecognized; took: 0:00:00.000013
DEBUG: Trying rule: remove_trailing_cedilla; took: 0:00:00.000003
DEBUG: Trying rule: rm_dir; took: 0:00:00.000009
DEBUG: Trying rule: scm_correction; took: 0:00:00.000017
DEBUG: Trying rule: sed_unterminated_s; took: 0:00:00.000013
DEBUG: Trying rule: sl_ls; took: 0:00:00.000002
DEBUG: Trying rule: ssh_known_hosts; took: 0:00:00.000014
DEBUG: Trying rule: sudo; took: 0:00:00.000004
DEBUG: Trying rule: sudo_command_from_user_path; took: 0:00:00.000125
DEBUG: Trying rule: switch_lang; took: 0:00:00.000022
DEBUG: Trying rule: systemctl; took: 0:00:00.000057
DEBUG: Trying rule: tmux; took: 0:00:00.000014
DEBUG: Trying rule: touch; took: 0:00:00.000016
DEBUG: Trying rule: tsuru_login; took: 0:00:00.000013
DEBUG: Trying rule: tsuru_not_command; took: 0:00:00.000012
DEBUG: Trying rule: unknown_command; took: 0:00:00.000114
DEBUG: Trying rule: unsudo; took: 0:00:00.000004
DEBUG: Trying rule: vagrant_up; took: 0:00:00.000015
DEBUG: Trying rule: whois; took: 0:00:00.000014
DEBUG: Trying rule: workon_doesnt_exists; took: 0:00:00.000014
DEBUG: Trying rule: yarn_alias; took: 0:00:00.000013
DEBUG: Trying rule: yarn_command_not_found; took: 0:00:00.000014
DEBUG: Trying rule: yarn_command_replaced; took: 0:00:00.000021
DEBUG: Trying rule: yarn_help; took: 0:00:00.000014
DEBUG: Trying rule: man_no_space; took: 0:00:00.000002
DEBUG: Trying rule: no_command; took: 0:00:00.011747
sudo install python [enter/↑/↓/ctrl+c]
Aborted
DEBUG: Total took: 0:00:04.897380
```
While this is a matter of opinion rather than objectively a bug, I would imagine the correction I suggested is a little more intuitive than the current one.
In case my suggestion is well-accepted, I would be happy to take this issue and make the correction. (Happy Hacktoberfest!)
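For reference, the `Importing rule` / `Trying rule` lines in the debug output above correspond to thefuck's two rule phases: every rule module is imported, then each rule's `match()` is tried against the failed command until one fires. A minimal sketch of that rule shape (the `Command` stand-in and the specific correction below are illustrative assumptions, not the change proposed in this issue):

```python
from collections import namedtuple

# Minimal stand-in for thefuck's Command object (an assumption for
# illustration; real rules receive thefuck.types.Command).
Command = namedtuple('Command', ['script', 'output'])

# Each rule module exposes match() and get_new_command(); thefuck
# imports the module ("Importing rule"), then calls match() against
# the failed command ("Trying rule").
def match(command):
    return (command.script.startswith('sudo install ')
            and 'command not found' in command.output)

def get_new_command(command):
    # Illustrative correction only.
    return command.script.replace('sudo install', 'sudo apt install')
```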
| null | https://github.com/nvbn/thefuck/pull/977 | null | {'base_commit': '8fa10b1049ddf21f188b9605bcd5afbe33bf33db', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [338]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | ad24759871ea43131711cfce1e5fc69c06d82956 | https://github.com/pandas-dev/pandas/issues/16668 | Clean | CLN: private impl of OrderedDefaultDict can be removed | https://github.com/pandas-dev/pandas/blob/master/pandas/compat/__init__.py#L376
I think this was left over from 2.6 compat.
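For context, the private class merged `OrderedDict` ordering with `defaultdict`-style on-demand defaults. Since `collections.OrderedDict` has been available since Python 2.7, the same pattern can be expressed without a custom class — a sketch of the equivalent (not the actual PR diff):

```python
from collections import OrderedDict

# OrderedDefaultdict(list)-style behavior with a plain OrderedDict:
# setdefault() supplies the on-demand default while preserving
# insertion order of the keys.
d = OrderedDict()
for key, value in [("a", 1), ("b", 2), ("a", 3)]:
    d.setdefault(key, []).append(value)

print(list(d.items()))  # [('a', [1, 3]), ('b', [2])]
```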
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/core/panel.py",
"pandas/compat/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 3694711a7e975324d52c258ab73a8f5e766a3f1c | https://github.com/ansible/ansible/issues/54746 | module
support:community
bug
traceback
affects_2.7
crypto | acme_certificate - dest must include path info or fails | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
Calling acme_certificate with dest set to a pure filename (no path) will cause the module to fail.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
acme_certificate
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.7.9
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/dhagan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhagan/.local/lib/python2.7/site-packages/ansible
executable location = /home/dhagan/.local/bin/ansible
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
DEFAULT_CALLBACK_WHITELIST(/etc/ansible/ansible.cfg) = [u'timer']
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Ubuntu 18.04 on Windows 10
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
The example below uses the dns-01 challenge against Route 53. Either fill in the appropriate domain info, or modify as you see fit to allow the challenge to pass.
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: letsencrypt
hosts: localhost
connection: local
gather_facts: false
vars:
target: "myhost.mytld.com"
zone: "mytld.com"
contact: "mailto:name@example.com"
tasks:
- name: acme account
acme_account:
account_key_src: test_key.pem
acme_directory: "https://acme-staging-v02.api.letsencrypt.org/directory"
acme_version: 2
allow_creation: true
contact: "{{ contact }}"
state: present
terms_agreed: yes
validate_certs: yes
register: account
- name: create private key
openssl_privatekey:
path: test.key
size: 2048
type: RSA
- name: create CSR if not present
openssl_csr:
common_name: "{{ target }}"
path: test.csr
privatekey_path: test.key
subject_alt_name: "DNS:{{target}}"
- name: acme request
acme_certificate:
account_key_src: test_key.pem
modify_account: no
account_uri: "{{ account.account_uri }}"
challenge: "dns-01"
csr: test.csr
dest: test.cert
terms_agreed: yes
validate_certs: yes
register: acme_request
- name: meet challenge requirements
route53:
zone: "{{ zone }}"
record: "{{ acme_request.challenge_data[target]['dns-01'].record }}"
type: TXT
ttl: 60
state: present
overwrite: yes
wait: yes
value: "{{ acme_request.challenge_data[target]['dns-01'].resource_value | regex_replace('^(.*)$', '\"\\1\"') }}"
when: acme_request is changed
- name: acme certificate
acme_certificate:
account_key_src: test_key.pem
modify_account: no
account_uri: "{{ account.account_uri }}"
challenge: "dns-01"
src: test.csr
dest: test.cert
data: "{{ acme_request }}"
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
The certificate gets written to test.cert in the current directory.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
This section of ansible/lib/ansible/module_utils/acme.py, starting at line 121, causes the module to fail because os.path.dirname(dest) returns an empty string for a bare filename.
```
else:
    if not os.access(os.path.dirname(dest), os.W_OK):
        os.remove(tmpsrc)
        raise ModuleFailException("Destination dir %s not writable" % (os.path.dirname(dest)))
```
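The failure mode can be reproduced in isolation; the `or "."` fallback shown at the end is one conventional fix pattern, not the actual patch:

```python
import os

# For a bare filename, dirname() returns the empty string...
dest = "test.cert"
assert os.path.dirname(dest) == ""

# ...and os.access("", os.W_OK) is False (the empty path does not
# exist), so the writability check above raises even when the
# current directory is perfectly writable.
print(os.access(os.path.dirname(dest), os.W_OK))  # False

# One conventional fix: fall back to "." when dirname() is empty.
dest_dir = os.path.dirname(dest) or "."
print(os.access(dest_dir, os.W_OK))
```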
<!--- Paste verbatim command output between quotes -->
```paste below
dhagan@onmyoji-shi:~/cloud-ansible$ ansible-playbook -vvv test.yml
ansible-playbook 2.7.9
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/dhagan/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/dhagan/.local/lib/python2.7/site-packages/ansible
executable location = /home/dhagan/.local/bin/ansible-playbook
python version = 2.7.15rc1 (default, Nov 12 2018, 14:31:15) [GCC 7.3.0]
Using /etc/ansible/ansible.cfg as config file
/etc/ansible/hosts did not meet host_list requirements, check plugin documentation if this is unexpected
/etc/ansible/hosts did not meet script requirements, check plugin documentation if this is unexpected
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'
PLAYBOOK: test.yml **************************************************************************************************************************************************************************************************************************
1 plays in test.yml
PLAY [letsencrypt] **************************************************************************************************************************************************************************************************************************
META: ran handlers
TASK [acme account] *************************************************************************************************************************************************************************************************************************
task path: /mnt/xxxx/test.yml:12
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan
<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222 `" && echo ansible-tmp-1554239227.37-238894588553222="` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222 `" ) && sleep 0'
Using module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/acme/acme_account.py
<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpmJ9dFJ TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/AnsiballZ_acme_account.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/AnsiballZ_acme_account.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/AnsiballZ_acme_account.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239227.37-238894588553222/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"account_uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx",
"changed": false,
"invocation": {
"module_args": {
"account_key_content": null,
"account_key_src": "test_key.pem",
"account_uri": null,
"acme_directory": "https://acme-staging-v02.api.letsencrypt.org/directory",
"acme_version": 2,
"allow_creation": true,
"contact": [
"mailto:xxxxxx"
],
"new_account_key_content": null,
"new_account_key_src": null,
"select_crypto_backend": "auto",
"state": "present",
"terms_agreed": true,
"validate_certs": true
}
}
}
TASK [create private key] *******************************************************************************************************************************************************************************************************************
task path: /xxxx/test.yml:24
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan
<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355 `" && echo ansible-tmp-1554239231.41-236536755185355="` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355 `" ) && sleep 0'
Using module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/openssl_privatekey.py
<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpD_PkzS TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/AnsiballZ_openssl_privatekey.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/AnsiballZ_openssl_privatekey.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/AnsiballZ_openssl_privatekey.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239231.41-236536755185355/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"changed": false,
"filename": "test.key",
"fingerprint": {
"md5": "xxxx",
"sha1": "xxxx",
"sha224": "xxxx",
"sha256": "xxxx",
"sha384": "xxxx",
"sha512": "xxxx"
},
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"cipher": null,
"content": null,
"delimiter": null,
"directory_mode": null,
"follow": false,
"force": false,
"group": null,
"mode": null,
"owner": null,
"passphrase": null,
"path": "test.key",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"size": 2048,
"src": null,
"state": "present",
"type": "RSA",
"unsafe_writes": null
}
},
"size": 2048,
"type": "RSA"
}
TASK [create CSR if not present] ************************************************************************************************************************************************************************************************************
task path: /xxxx/test.yml:30
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan
<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665 `" && echo ansible-tmp-1554239232.48-242780450892665="` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665 `" ) && sleep 0'
Using module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/openssl_csr.py
<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpOlkhLU TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/AnsiballZ_openssl_csr.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/AnsiballZ_openssl_csr.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/AnsiballZ_openssl_csr.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239232.48-242780450892665/ > /dev/null 2>&1 && sleep 0'
ok: [localhost] => {
"basicConstraints": null,
"changed": false,
"extendedKeyUsage": null,
"filename": "test.csr",
"invocation": {
"module_args": {
"attributes": null,
"backup": null,
"basicConstraints": null,
"basicConstraints_critical": false,
"commonName": "xxxx",
"common_name": "xxxx",
"content": null,
"countryName": null,
"delimiter": null,
"digest": "sha256",
"directory_mode": null,
"emailAddress": null,
"extendedKeyUsage": null,
"extendedKeyUsage_critical": false,
"follow": false,
"force": false,
"group": null,
"keyUsage": null,
"keyUsage_critical": false,
"localityName": null,
"mode": null,
"ocspMustStaple": false,
"ocspMustStaple_critical": false,
"organizationName": null,
"organizationalUnitName": null,
"owner": null,
"path": "test.csr",
"privatekey_passphrase": null,
"privatekey_path": "test.key",
"regexp": null,
"remote_src": null,
"selevel": null,
"serole": null,
"setype": null,
"seuser": null,
"src": null,
"state": "present",
"stateOrProvinceName": null,
"subject": null,
"subjectAltName": [
"DNS:xxxx"
],
"subjectAltName_critical": false,
"subject_alt_name": "DNS:xxxx",
"unsafe_writes": null,
"version": 1
}
},
"keyUsage": null,
"ocspMustStaple": false,
"privatekey": "test.key",
"subject": [
[
"CN",
"xxxx"
]
],
"subjectAltName": [
"DNS:xxxx"
]
}
TASK [acme request] *************************************************************************************************************************************************************************************************************************
task path: /xxx/test.yml:37
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan
<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495 `" && echo ansible-tmp-1554239233.51-137769391892495="` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495 `" ) && sleep 0'
Using module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/acme/acme_certificate.py
<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpgY1bDo TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/AnsiballZ_acme_certificate.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/AnsiballZ_acme_certificate.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/AnsiballZ_acme_certificate.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239233.51-137769391892495/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"account_uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx",
"authorizations": {
"xxxx": {
"challenges": [
{
"status": "pending",
"token": "xxxx",
"type": "dns-01",
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx"
},
{
"status": "pending",
"token": "xxxx",
"type": "tls-alpn-01",
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx"
},
{
"status": "pending",
"token": "xxxx",
"type": "http-01",
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx"
}
],
"combinations": [
[
1
],
[
2
],
[
0
]
],
"expires": "2019-04-09T21:07:15Z",
"identifier": {
"type": "dns",
"value": "xxxx"
},
"status": "pending",
"uri": "https://acme-staging.api.letsencrypt.org/acme/authz/xxxx"
}
},
"cert_days": -1,
"challenge_data": {
"test-name.dhagan.dev.nsoc.state911.us": {
"dns-01": {
"record": "_acme-challenge.xxxx",
"resource": "_acme-challenge",
"resource_value": "xxxx"
},
"http-01": {
"resource": ".well-known/acme-challenge/xxxx",
"resource_value": "xxxx"
},
"tls-alpn-01": {
"resource": "xxxx",
"resource_value": "xxxx"
}
}
},
"challenge_data_dns": {
"_acme-challenge.xxxx": [
"xxxx"
]
},
"changed": true,
"finalize_uri": null,
"invocation": {
"module_args": {
"account_email": null,
"account_key_content": null,
"account_key_src": "test_key.pem",
"account_uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx",
"acme_directory": "https://acme-staging.api.letsencrypt.org/directory",
"acme_version": 1,
"agreement": null,
"chain_dest": null,
"challenge": "dns-01",
"csr": "test.csr",
"data": null,
"deactivate_authzs": false,
"dest": "test.cert",
"force": false,
"fullchain_dest": null,
"modify_account": false,
"remaining_days": 10,
"select_crypto_backend": "auto",
"terms_agreed": true,
"validate_certs": true
}
},
"order_uri": null
}
TASK [meet challenge requirements] **********************************************************************************************************************************************************************************************************
task path: /xxxx/test.yml:49
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan
<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110 `" && echo ansible-tmp-1554239236.67-178715692795110="` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110 `" ) && sleep 0'
Using module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/cloud/amazon/route53.py
<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpCRBtC3 TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/AnsiballZ_route53.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/AnsiballZ_route53.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/AnsiballZ_route53.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239236.67-178715692795110/ > /dev/null 2>&1 && sleep 0'
changed: [localhost] => {
"changed": true,
"invocation": {
"module_args": {
"alias": null,
"alias_evaluate_target_health": false,
"alias_hosted_zone_id": null,
"aws_access_key": null,
"aws_secret_key": null,
"ec2_url": null,
"failover": null,
"health_check": null,
"hosted_zone_id": null,
"identifier": null,
"overwrite": true,
"private_zone": false,
"profile": null,
"record": "_acme-challenge.xxxx",
"region": null,
"retry_interval": "500",
"security_token": null,
"state": "present",
"ttl": 60,
"type": "TXT",
"validate_certs": true,
"value": [
"\"xxxx\""
],
"vpc_id": null,
"wait": true,
"wait_timeout": 600,
"weight": null,
"zone": "xxxx"
}
}
}
TASK [acme certificate] *********************************************************************************************************************************************************************************************************************
task path: /xxxx/test.yml:62
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: dhagan
<127.0.0.1> EXEC /bin/sh -c 'echo ~dhagan && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429 `" && echo ansible-tmp-1554239274.52-105161436966429="` echo /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429 `" ) && sleep 0'
Using module file /home/dhagan/.local/lib/python2.7/site-packages/ansible/modules/crypto/acme/acme_certificate.py
<127.0.0.1> PUT /home/dhagan/.ansible/tmp/ansible-local-12205ml_9CV/tmpvcA5GB TO /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/AnsiballZ_acme_certificate.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/ /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/AnsiballZ_acme_certificate.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/AnsiballZ_acme_certificate.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/dhagan/.ansible/tmp/ansible-tmp-1554239274.52-105161436966429/ > /dev/null 2>&1 && sleep 0'
The full traceback is:
WARNING: The below traceback may *not* be related to the actual failure.
File "/tmp/ansible_acme_certificate_payload_B3NU6E/__main__.py", line 931, in main
client.get_certificate()
File "/tmp/ansible_acme_certificate_payload_B3NU6E/__main__.py", line 824, in get_certificate
if self.dest and write_file(self.module, self.dest, pem_cert.encode('utf8')):
File "/tmp/ansible_acme_certificate_payload_B3NU6E/ansible_acme_certificate_payload.zip/ansible/module_utils/acme.py", line 138, in write_file
raise ModuleFailException("Destination dir %s not writable" % (os.path.dirname(dest)))
fatal: [localhost]: FAILED! => {
"changed": false,
"invocation": {
"module_args": {
"account_email": null,
"account_key_content": null,
"account_key_src": "test_key.pem",
"account_uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx",
"acme_directory": "https://acme-staging.api.letsencrypt.org/directory",
"acme_version": 1,
"agreement": null,
"chain_dest": null,
"challenge": "dns-01",
"csr": "test.csr",
"data": {
"account_uri": "https://acme-staging-v02.api.letsencrypt.org/acme/acct/xxxx",
"authorizations": {
"test-name.dhagan.dev.nsoc.state911.us": {
"challenges": [
{
"status": "pending",
"token": "xxxx",
"type": "dns-01",
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx"
},
{
"status": "pending",
"token": "xxxx",
"type": "tls-alpn-01",
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/xxxxx/xxxx"
},
{
"status": "pending",
"token": "xxxx",
"type": "http-01",
"uri": "https://acme-staging.api.letsencrypt.org/acme/challenge/xxxx/xxxx"
}
],
"combinations": [
[
1
],
[
2
],
[
0
]
],
"expires": "2019-04-09T21:07:15Z",
"identifier": {
"type": "dns",
"value": "xxxx"
},
"status": "pending",
"uri": "https://acme-staging.api.letsencrypt.org/acme/authz/xxxx"
}
},
"cert_days": -1,
"challenge_data": {
"xxxx": {
"dns-01": {
"record": "_acme-challenge.xxxx",
"resource": "_acme-challenge",
"resource_value": "xxxx"
},
"http-01": {
"resource": ".well-known/acme-challenge/xxxx",
"resource_value": "xxxx"
},
"tls-alpn-01": {
"resource": "xxxx",
"resource_value": "xxxx"
}
}
},
"challenge_data_dns": {
"_acme-challenge.xxxx": [
"xxxx"
]
},
"changed": true,
"failed": false,
"finalize_uri": null,
"order_uri": null
},
"deactivate_authzs": false,
"dest": "test.cert",
"force": false,
"fullchain_dest": null,
"modify_account": false,
"remaining_days": 10,
"select_crypto_backend": "auto",
"src": "test.csr",
"terms_agreed": false,
"validate_certs": true
}
},
"msg": "Destination dir not writable",
"other": {}
}
to retry, use: --limit @/xxxx/test.retry
PLAY RECAP **********************************************************************************************************************************************************************************************************************************
localhost : ok=5 changed=2 unreachable=0 failed=1
Playbook run took 0 days, 0 hours, 1 minutes, 0 seconds
```
| null | https://github.com/ansible/ansible/pull/54754 | null | {'base_commit': '3694711a7e975324d52c258ab73a8f5e766a3f1c', 'files': [{'path': 'lib/ansible/module_utils/acme.py', 'status': 'modified', 'Loc': {"(None, 'write_file', 79)": {'mod': [122, 124]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/module_utils/acme.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 41750a6cff55e401364568868d619747de3db037 | https://github.com/huggingface/transformers/issues/3785 | wontfix
Core: Encoder-Decoder | How to fine tune EncoderDecoder model for training a new corpus of data ? | is there any documentation available for the same? | null | https://github.com/huggingface/transformers/pull/3383 | null | {'base_commit': '41750a6cff55e401364568868d619747de3db037', 'files': [{'path': 'docs/source/index.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [91]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [43], 'mod': [270]}}}, {'path': 'src/transformers/configuration_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27, 84]}}}, {'path': 'src/transformers/modeling_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [29, 88, 221]}}}, {'path': 'src/transformers/modeling_bert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [961]}}}, {'path': 'src/transformers/modeling_encoder_decoder.py', 'status': 'modified', 'Loc': {"('PreTrainedEncoderDecoder', None, 29)": {'add': [36], 'mod': [29, 31, 33, 35, 38, 39, 44, 158, 159, 160]}, "('PreTrainedEncoderDecoder', '__init__', 38)": {'add': [41]}, "('PreTrainedEncoderDecoder', 'from_pretrained', 44)": {'add': [145, 150], 'mod': [46, 47, 50, 54, 55, 58, 65, 75, 76, 78, 79, 80, 82, 83, 84, 85, 87, 88, 89, 91, 92, 94, 95, 96, 98, 99, 104, 105, 107, 111, 112, 115, 116, 117, 118, 119, 120, 121, 122, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 154]}, '(None, None, None)': {'mod': [19, 21, 23]}, "('PreTrainedEncoderDecoder', 'save_pretrained', 158)": {'mod': [162, 165, 166, 167, 169, 170, 171, 172, 173, 174, 176, 177, 178, 179, 180, 181, 183, 184, 185, 186, 187, 188, 189, 190, 192, 194, 195, 196, 197, 199, 200, 201, 202, 204, 205, 207, 208, 209, 210, 211, 213, 214]}, "('PreTrainedEncoderDecoder', 'forward', 204)": {'mod': [216, 217, 218, 219, 220, 221, 222, 223, 225, 226, 227, 228, 229, 231, 233, 234, 236]}}}, {'path': 'src/transformers/modeling_utils.py', 
'status': 'modified', 'Loc': {"('PreTrainedModel', 'generate', 764)": {'mod': [1014]}}}, {'path': 'src/transformers/utils_encoder_decoder.py', 'status': 'removed', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/configuration_auto.py",
"src/transformers/__init__.py",
"src/transformers/modeling_auto.py",
"src/transformers/utils_encoder_decoder.py",
"src/transformers/modeling_utils.py",
"src/transformers/modeling_bert.py",
"src/transformers/modeling_encoder_decoder.py"
],
"doc": [
"docs/source/index.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 2908a2c32a81fca78277a22f15fa8e3abe75e092 | https://github.com/ansible/ansible/issues/71517 | easyfix
python3
module
support:core
bug
has_pr
P3
system
affects_2.9 | Reboot module doesn't work with async | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
The `reboot` module does not work with `async`, `poll`, and `async_status`. Suppose I have 10 nodes to reboot, but I can only set `fork` to 2. The `reboot` module will reboot 2 nodes at a time. I tried using `async`, `poll`, and `async_status` to kick off the reboots on the 10 nodes, 2 at a time, and then poll for the results. `async` and `poll` seem to do nothing on the `reboot` module as the behavior remains the same as without them.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
`reboot` module
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.12
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/user/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib64/python3.6/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.6.8 (default, May 2 2019, 19:37:42) [GCC 4.4.7 20120313 (Red Hat 4.4.7-23)]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
ANSIBLE_SSH_ARGS(/etc/ansible/ansible.cfg) = -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null
ANSIBLE_SSH_RETRIES(/etc/ansible/ansible.cfg) = 2
COMMAND_WARNINGS(/etc/ansible/ansible.cfg) = False
DEFAULT_FORKS(/etc/ansible/ansible.cfg) = 2
DEFAULT_GATHERING(/etc/ansible/ansible.cfg) = explicit
DEFAULT_TIMEOUT(/etc/ansible/ansible.cfg) = 40
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = False
RETRY_FILES_ENABLED(/etc/ansible/ansible.cfg) = False
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
Described in the summary
<!--- Paste example playbooks or commands between quotes below -->
```yaml
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I expect the `reboot` module to start the reboot on 2 nodes, then move on to something else (like start the reboot on another 2 nodes), then come back to check on the results of the reboots by using `async`, `poll`, and `async_status`.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The `reboot` module ignores `async` and `poll`.
<!--- Paste verbatim command output between quotes -->
```paste below
```
| null | https://github.com/ansible/ansible/pull/80017 | null | {'base_commit': '2908a2c32a81fca78277a22f15fa8e3abe75e092', 'files': [{'path': 'lib/ansible/plugins/action/reboot.py', 'status': 'modified', 'Loc': {"('ActionModule', 'run', 409)": {'mod': [411]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "0",
"info_type": "Code"
} | {
"code": [
"lib/ansible/plugins/action/reboot.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
pandas-dev | pandas | c482b5727e3bd98b6f9780e51615791e413d542d | https://github.com/pandas-dev/pandas/issues/29916 | Enhancement
IO HDF5 | HDF5: empty groups and keys | Hi,
With some of the hdf5 files I have, `pandas.HDFStore.groups()` returns an empty list. (as does `.keys()` which iterates over the groups). However, the data are accessible via `.get()` or `.get_node()`.
This is related to #21543 and #21372 where the `.groups()` logic was changed, in particular using `self._handle.walk_groups()` instead of `self._handle.walk_nodes()`, now to be found here:
https://github.com/pandas-dev/pandas/blob/ea2e26ae7d700d7fd363ea5bfc05d2fe3fb8a5ee/pandas/io/pytables.py#L1212
#### Current Output
```python
>>> hdf.groups()
[]
```
```python
>>> hdf.keys()
[]
```
#### Expected Ouptut
List of groups and keys as visible with e.g. `h5dump`.
**Note:** Changing the aforementioned line back to use `.walk_nodes()` fixes the issue and lists the groups and keys properly:
```python
>>> hdf.groups()
[/Data/Table Layout (Table(69462,), zlib(4)) ''
description := {
...
/Data/Array Layout/2D Parameters/Data Parameters (Table(15,)) ''
description := {
"mnemonic": StringCol(itemsize=8, shape=(), dflt=b'', pos=0),
"description": StringCol(itemsize=48, shape=(), dflt=b'', pos=1),
"isError": Int64Col(shape=(), dflt=0, pos=2),
"units": StringCol(itemsize=7, shape=(), dflt=b'', pos=3),
"category": StringCol(itemsize=31, shape=(), dflt=b'', pos=4)}
byteorder := 'little'
chunkshape := (642,)]]
```
```python
>>> hdf.keys()
['/Data/Table Layout',
'/Metadata/Data Parameters',
'/Metadata/Experiment Notes',
'/Metadata/Experiment Parameters',
'/Metadata/Independent Spatial Parameters',
'/Metadata/_record_layout',
'/Data/Array Layout/Layout Description',
'/Data/Array Layout/1D Parameters/Data Parameters',
'/Data/Array Layout/2D Parameters/Data Parameters']
```
#### Fix
One solution would be (I guess) to revert #21543, another to fix at least `.keys()` to use `._handle.walk_nodes()` instead of `.groups()` in
https://github.com/pandas-dev/pandas/blob/ea2e26ae7d700d7fd363ea5bfc05d2fe3fb8a5ee/pandas/io/pytables.py#L562
Could also be that it is a bug in `pytables`.
#### Problem background
I was trying to figure out why some hdf5 files open fine with `pandas` but fail with `dask`.
The reason is that `dask` allows wildcards and iterates over the keys to find valid ones. If `.keys()` is empty, reading the files with `dask` fails.
#### Output of ``pd.show_versions()``
<details>
INSTALLED VERSIONS
------------------
commit : None
python : 3.7.3.final.0
python-bits : 64
OS : Linux
OS-release : 3.10.0-957.27.2.el7.x86_64
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : C
LOCALE : en_US.UTF-8
pandas : 0.25.3
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 42.0.1.post20191125
Cython : None
pytest : 5.0.1
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : 4.4.2
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.10.0
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : 4.4.2
matplotlib : 3.1.2
numexpr : 2.7.0
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
s3fs : None
scipy : 1.3.2
sqlalchemy : None
tables : 3.6.1
xarray : 0.14.1
xlrd : None
xlwt : None
xlsxwriter : None
</details>
| null | https://github.com/pandas-dev/pandas/pull/32723 | null | {'base_commit': 'c482b5727e3bd98b6f9780e51615791e413d542d', 'files': [{'path': 'doc/source/whatsnew/v1.1.0.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [964]}}}, {'path': 'pandas/io/pytables.py', 'status': 'modified', 'Loc': {"('HDFStore', 'keys', 583)": {'add': [586, 590], 'mod': [592]}, "('HDFStore', None, 442)": {'mod': [583]}}}, {'path': 'pandas/tests/io/pytables/test_store.py', 'status': 'modified', 'Loc': {"('TestHDFStore', None, 66)": {'add': [343]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"pandas/io/pytables.py"
],
"doc": [
"doc/source/whatsnew/v1.1.0.rst"
],
"test": [
"pandas/tests/io/pytables/test_store.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | f3372a3753643fea601564c01fcf65cc25a2db62 | https://github.com/scrapy/scrapy/issues/4667 | bug | List of fields incorrectly accessed for dataclass items | ### Description
If I make a `dataclass` item and want to export to csv, I get this error:
```
...
File "/home/tadej/miniconda3/envs/main/lib/python3.7/site-packages/scrapy/exporters.py", line 251, in _write_headers_and_set_fields_to_export
self.fields_to_export = list(item.fields.keys())
AttributeError: 'CompanyItem' object has no attribute 'fields'
```
The problem stems from here
https://github.com/scrapy/scrapy/blob/master/scrapy/exporters.py#L243-L253
There should be an additional if case checking if the item is of type dataclass, and then accessing the fields differently, perhaps as
```python
[field.name for field in fields(item)]
```
| null | https://github.com/scrapy/scrapy/pull/4668 | null | {'base_commit': 'f3372a3753643fea601564c01fcf65cc25a2db62', 'files': [{'path': 'scrapy/exporters.py', 'status': 'modified', 'Loc': {"('CsvItemExporter', '_write_headers_and_set_fields_to_export', 243)": {'mod': [246, 247, 248, 249, 250, 251]}}}, {'path': 'tests/test_exporters.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 25, 159, 168, 199, 221, 311, 414, 451, 529]}, "('BaseItemExporterTest', None, 26)": {'add': [27]}, "('BaseItemExporterTest', 'setUp', 28)": {'mod': [29]}, "('BaseItemExporterTest', '_assert_expected_item', 39)": {'mod': [42]}, "('BaseItemExporterTest', 'test_export_dict_item', 65)": {'mod': [66]}, "('BaseItemExporterTest', 'test_serialize_field', 68)": {'mod': [69, 72]}, "('BaseItemExporterTest', 'test_field_custom_serializer', 84)": {'mod': [85, 86, 88, 89, 90, 92, 94, 95, 96]}, "('PythonItemExporterTest', 'test_nested_item', 107)": {'mod': [108, 110]}, "('PythonItemExporterTest', 'test_export_list', 121)": {'mod': [122, 123, 124]}, "('PythonItemExporterTest', 'test_export_item_dict_list', 134)": {'mod': [135, 137]}, "('PythonItemExporterTest', 'test_export_binary', 147)": {'mod': [149]}, "('PickleItemExporterTest', 'test_export_multiple_items', 177)": {'mod': [178, 179, 187, 188]}, "('CsvItemExporterTest', 'test_header_export_all', 245)": {'mod': [248]}, "('CsvItemExporterTest', 'test_header_export_all_dict', 252)": {'mod': [254]}, "('CsvItemExporterTest', 'test_header_export_single_field', 258)": {'mod': [259]}, "('CsvItemExporterTest', 'test_header_export_two_items', 266)": {'mod': [267]}, "('CsvItemExporterTest', 'test_header_no_header_line', 277)": {'mod': [278]}, "('XmlItemExporterTest', 'xmltuple', 318)": {'mod': [321, 322]}, "('XmlItemExporterTest', 'test_multivalued_fields', 346)": {'mod': [348, 349, 350, 351, 352]}, "('XmlItemExporterTest', 'test_nested_item', 355)": {'mod': [356, 358]}, "('XmlItemExporterTest', 'test_nested_list_item', 378)": {'mod': 
[379, 381]}, "('JsonLinesItemExporterTest', '_check_output', 422)": {'mod': [424]}, "('JsonLinesItemExporterTest', 'test_nested_item', 426)": {'mod': [427, 429]}, "('JsonItemExporterTest', '_check_output', 459)": {'mod': [461]}, "('JsonItemExporterTest', 'assertTwoItemsExported', 463)": {'mod': [469]}, "('JsonItemExporterTest', 'test_two_dict_items', 474)": {'mod': [475]}, "('JsonItemExporterTest', 'test_nested_item', 477)": {'mod': [478, 479, 480, 485]}, "('JsonItemExporterTest', 'test_nested_dict_item', 488)": {'mod': [490]}, "('CustomItemExporterTest', None, 509)": {'mod': [509]}, "('CustomItemExporterTest', 'test_exporter_custom_serializer', 511)": {'mod': [519, 522, 523]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scrapy/exporters.py"
],
"doc": [],
"test": [
"tests/test_exporters.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 2b8e056d5d4d14665b88a01c41356253c94b9259 | https://github.com/AntonOsika/gpt-engineer/issues/322 | enhancement
good first issue | Print and store how many tokens were used in memory/logs | In this way, we can also store this to benchmark results.
A huge increase in tokens will not be worth a minor improvement in benchmark results. | null | https://github.com/AntonOsika/gpt-engineer/pull/438 | null | {'base_commit': '2b8e056d5d4d14665b88a01c41356253c94b9259', 'files': [{'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 7, 11, 56]}, "('AI', 'next', 34)": {'add': [54]}, "('AI', None, 12)": {'mod': [17, 34]}, "('AI', 'start', 17)": {'mod': [23]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [63]}}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 37]}, "(None, 'clarify', 48)": {'add': [73], 'mod': [55]}, "(None, 'respec', 107)": {'add': [121], 'mod': [111]}, "(None, 'gen_entrypoint', 212)": {'add': [226]}, "(None, 'simple_gen', 41)": {'mod': [43]}, "(None, 'gen_spec', 90)": {'mod': [100]}, "(None, 'gen_unit_tests', 128)": {'mod': [138]}, "(None, 'gen_clarified_code', 146)": {'mod': [154]}, "(None, 'gen_code', 160)": {'mod': [169]}, "(None, 'use_feedback', 236)": {'mod': [243]}, "(None, 'fix_code', 248)": {'mod': [256]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/ai.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
huggingface | transformers | 6dda14dc47d82f0e32df05fea8ba6444ba52b90a | https://github.com/huggingface/transformers/issues/20058 | Push to Hub fails with `model_name` | ### System Info
- `transformers` version: 4.25.0.dev0
- Platform: Linux-5.15.0-48-generic-x86_64-with-glibc2.31
- Python version: 3.9.13
- Huggingface_hub version: 0.10.1
- PyTorch version (GPU?): 1.13.0+cu117 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: yes
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```python
from datasets import load_dataset, DatasetDict
common_voice = DatasetDict()
#common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train+validation", use_auth_token=True)
#common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test", use_auth_token=True)
common_voice["train"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="train[:1%]+validation[:1%]", use_auth_token=True)
common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "sv-SE", split="test[:1%]", use_auth_token=True)
print(common_voice)
common_voice = common_voice.remove_columns(["accent", "age", "client_id", "down_votes", "gender", "locale", "path", "segment", "up_votes"])
print(common_voice)
from transformers import WhisperFeatureExtractor
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-small")
from transformers import WhisperTokenizer
tokenizer = WhisperTokenizer.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
from transformers import WhisperProcessor
processor = WhisperProcessor.from_pretrained("openai/whisper-small", language="swedish", task="transcribe")
print(common_voice["train"][0])
from datasets import Audio
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16000))
print(common_voice["train"][0])
def prepare_dataset(batch):
# load and resample audio data from 48 to 16kHz
audio = batch["audio"]
# compute log-Mel input features from input audio array
batch["input_features"] = feature_extractor(audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
# encode target text to label ids
batch["labels"] = tokenizer(batch["sentence"]).input_ids
return batch
common_voice = common_voice.map(prepare_dataset, remove_columns=common_voice.column_names["train"], num_proc=1)
import torch
from dataclasses import dataclass
from typing import Any, Dict, List, Union
@dataclass
class DataCollatorSpeechSeq2SeqWithPadding:
processor: Any
def __call__(self, features: List[Dict[str, Union[List[int], torch.Tensor]]]) -> Dict[str, torch.Tensor]:
# split inputs and labels since they have to be of different lengths and need different padding methods
# first treat the audio inputs by simply returning torch tensors
input_features = [{"input_features": feature["input_features"]} for feature in features]
batch = self.processor.feature_extractor.pad(input_features, return_tensors="pt")
# get the tokenized label sequences
label_features = [{"input_ids": feature["labels"]} for feature in features]
# pad the labels to max length
labels_batch = self.processor.tokenizer.pad(label_features, return_tensors="pt")
# replace padding with -100 to ignore loss correctly
labels = labels_batch["input_ids"].masked_fill(labels_batch.attention_mask.ne(1), -100)
# if bos token is appended in previous tokenization step,
# cut bos token here as it's append later anyways
if (labels[:, 0] == self.processor.tokenizer.bos_token_id).all().cpu().item():
labels = labels[:, 1:]
batch["labels"] = labels
return batch
"""Let's initialise the data collator we've just defined:"""
data_collator = DataCollatorSpeechSeq2SeqWithPadding(processor=processor)
import evaluate
metric = evaluate.load("wer")
def compute_metrics(pred):
pred_ids = pred.predictions
label_ids = pred.label_ids
# replace -100 with the pad_token_id
label_ids[label_ids == -100] = tokenizer.pad_token_id
# we do not want to group tokens when computing the metrics
pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)
wer = 100 * metric.compute(predictions=pred_str, references=label_str)
return {"wer": wer}
from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-sv-test2", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=10,
gradient_checkpointing=True,
fp16=True,
group_by_length=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
)
from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice["train"],
eval_dataset=common_voice["test"],
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
trainer.train()
"""Our best WER is 32.0% - not bad for 8h of training data! We can submit our checkpoint to the [`hf-speech-bench`](https://huggingface.co/spaces/huggingface/hf-speech-bench) on push by setting the appropriate key-word arguments (kwargs):"""
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
#"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
trainer.push_to_hub(**kwargs)
from transformers import pipeline
import gradio as gr
pipe = pipeline(model="birgermoell/whisper-small-sv-test2") # change to "your-username/the-name-you-picked"
def transcribe(audio):
text = pipe(audio)["text"]
return text
iface = gr.Interface(
fn=transcribe,
inputs=gr.Audio(source="microphone", type="filepath"),
outputs="text",
title="Whisper Small SV",
description="Realtime demo for Swedish speech recognition using a fine-tuned Whisper small model.",
)
iface.launch()
```
### Expected behavior
The following script is a downloaded version of the colab notebook that follows the whisper fine-tuning tutorial.
https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/fine_tune_whisper.ipynb
One edit was that I removed the model name since I had an issue that it was complaining about two model names that made it impossible to upload. The script just runs on 1% of the dataset on 10 epochs.
kwargs = {
"dataset_tags": "mozilla-foundation/common_voice_11_0",
"dataset": "Common Voice 11.0", # a 'pretty' name for the training dataset
"language": "sv",
#"model_name": "WhisperSmallSwedishBirgerMoell", # a 'pretty' name for our model
"finetuned_from": "openai/whisper-small",
"tasks": "automatic-speech-recognition",
"tags": "hf-asr-leaderboard",
}
https://huggingface.co/birgermoell/whisper-small-sv-test2
I also ran into similar issues when I trained a model on the whole dataset.
https://huggingface.co/birgermoell/whisper-small-sv
| null | https://github.com/huggingface/transformers/pull/20117 | null | {'base_commit': '6dda14dc47d82f0e32df05fea8ba6444ba52b90a', 'files': [{'path': 'src/transformers/models/clip/processing_clip.py', 'status': 'modified', 'Loc': {"('CLIPProcessor', 'decode', 102)": {'add': [107]}}}, {'path': 'src/transformers/models/flava/processing_flava.py', 'status': 'modified', 'Loc': {"('FlavaProcessor', 'decode', 119)": {'add': [124]}}}, {'path': 'src/transformers/models/layoutlmv2/processing_layoutlmv2.py', 'status': 'modified', 'Loc': {"('LayoutLMv2Processor', 'decode', 155)": {'add': [160]}}}, {'path': 'src/transformers/models/layoutlmv3/processing_layoutlmv3.py', 'status': 'modified', 'Loc': {"('LayoutLMv3Processor', 'decode', 153)": {'add': [158]}}}, {'path': 'src/transformers/models/layoutxlm/processing_layoutxlm.py', 'status': 'modified', 'Loc': {"('LayoutXLMProcessor', 'decode', 155)": {'add': [160]}}}, {'path': 'src/transformers/models/markuplm/processing_markuplm.py', 'status': 'modified', 'Loc': {"('MarkupLMProcessor', 'decode', 135)": {'add': [140]}}}, {'path': 'src/transformers/models/owlvit/processing_owlvit.py', 'status': 'modified', 'Loc': {"('OwlViTProcessor', 'decode', 156)": {'add': [161]}}}, {'path': 'src/transformers/models/vilt/processing_vilt.py', 'status': 'modified', 'Loc': {"('ViltProcessor', 'decode', 103)": {'add': [108]}}}, {'path': 'src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py', 'status': 'modified', 'Loc': {"('VisionTextDualEncoderProcessor', None, 25)": {'add': [129]}}}, {'path': 'src/transformers/models/x_clip/processing_x_clip.py', 'status': 'modified', 'Loc': {"('XCLIPProcessor', 'decode', 104)": {'add': [109]}}}, {'path': 'src/transformers/processing_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [229]}}}, {'path': 'tests/models/clip/test_processor_clip.py', 'status': 'modified', 'Loc': {"('CLIPProcessorTest', 'test_tokenizer_decode', 178)": {'add': [189]}}}, {'path': 
'tests/models/flava/test_processor_flava.py', 'status': 'modified', 'Loc': {"('FlavaProcessorTest', 'test_tokenizer_decode', 222)": {'add': [233]}}}, {'path': 'tests/models/layoutlmv2/test_processor_layoutlmv2.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}, "('LayoutLMv2ProcessorTest', None, 37)": {'add': [88, 135]}}}, {'path': 'tests/models/layoutlmv3/test_processor_layoutlmv3.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21, 148]}, "('LayoutLMv3ProcessorTest', None, 37)": {'add': [101]}}}, {'path': 'tests/models/layoutxlm/test_processor_layoutxlm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}, "('LayoutXLMProcessorTest', None, 43)": {'add': [76, 128]}}}, {'path': 'tests/models/markuplm/test_processor_markuplm.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [135]}}}, {'path': 'tests/models/mctct/test_processor_mctct.py', 'status': 'modified', 'Loc': {"('MCTCTProcessorTest', 'test_tokenizer_decode', 135)": {'add': [146]}}}, {'path': 'tests/models/owlvit/test_processor_owlvit.py', 'status': 'modified', 'Loc': {"('OwlViTProcessorTest', 'test_tokenizer_decode', 230)": {'add': [241]}}}, {'path': 'tests/models/speech_to_text/test_processor_speech_to_text.py', 'status': 'modified', 'Loc': {"('Speech2TextProcessorTest', 'test_tokenizer_decode', 135)": {'add': [146]}}}, {'path': 'tests/models/vision_text_dual_encoder/test_processor_vision_text_dual_encoder.py', 'status': 'modified', 'Loc': {"('VisionTextDualEncoderProcessorTest', 'test_tokenizer_decode', 159)": {'add': [170]}}}, {'path': 'tests/models/wav2vec2/test_processor_wav2vec2.py', 'status': 'modified', 'Loc': {"('Wav2Vec2ProcessorTest', 'test_tokenizer_decode', 128)": {'add': [139]}}}, {'path': 'tests/models/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py', 'status': 'modified', 'Loc': {"('Wav2Vec2ProcessorWithLMTest', None, 49)": {'add': [369]}}}, {'path': 'tests/models/whisper/test_processor_whisper.py', 'status': 'modified', 
'Loc': {"('WhisperProcessorTest', 'test_tokenizer_decode', 107)": {'add': [118]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/models/clip/processing_clip.py",
"src/transformers/models/layoutlmv2/processing_layoutlmv2.py",
"src/transformers/models/vilt/processing_vilt.py",
"src/transformers/models/x_clip/processing_x_clip.py",
"src/transformers/models/markuplm/processing_markuplm.py",
"src/transformers/models/flava/processing_flava.py",
"src/transformers/processing_utils.py",
"src/transformers/models/owlvit/processing_owlvit.py",
"src/transformers/models/layoutlmv3/processing_layoutlmv3.py",
"src/transformers/models/vision_text_dual_encoder/processing_vision_text_dual_encoder.py",
"src/transformers/models/layoutxlm/processing_layoutxlm.py"
],
"doc": [],
"test": [
"tests/models/mctct/test_processor_mctct.py",
"tests/models/layoutlmv2/test_processor_layoutlmv2.py",
"tests/models/layoutlmv3/test_processor_layoutlmv3.py",
"tests/models/owlvit/test_processor_owlvit.py",
"tests/models/markuplm/test_processor_markuplm.py",
"tests/models/layoutxlm/test_processor_layoutxlm.py",
"tests/models/whisper/test_processor_whisper.py",
"tests/models/speech_to_text/test_processor_speech_to_text.py",
"tests/models/clip/test_processor_clip.py",
"tests/models/flava/test_processor_flava.py",
"tests/models/wav2vec2/test_processor_wav2vec2.py",
"tests/models/vision_text_dual_encoder/test_processor_vision_text_dual_encoder.py",
"tests/models/wav2vec2_with_lm/test_processor_wav2vec2_with_lm.py"
],
"config": [],
"asset": []
} | 1 | |
ansible | ansible | b8e8fb48a84b65e805aecd263ebb7cd303e671ee | https://github.com/ansible/ansible/issues/34896 | networking
module
affects_2.4
support:community
aci
feature
cisco | aci_epg module needs to support PreferredGroup |
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
aci_epg
##### ANSIBLE VERSION
```
[root@ansible-server ~]# ansible --version
ansible 2.4.0.0
config file = None
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Aug 4 2017, 00:39:18) [GCC 4.8.5 20150623 (Red Hat 4.8.5-16)]
[root@ansible-server ~]#
```
##### CONFIGURATION
[root@ansible-server ~]# ansible-config dump --only-changed
[root@ansible-server ~]#
##### OS / ENVIRONMENT
Ansible server on CentOS 7.3 and ACI version 3.0 or 3.1
##### SUMMARY
Since ACI 2.3 it is possible to configure EPGs to be part of a Preferred Group. This is a new attribute of the fvAEPg object. EPGs that are part of the Preferred Group can communicate without contracts. This is very convenient for migration scenarios as well as customers that implement ACI for network automation but not for policy.
##### STEPS TO REPRODUCE
##### EXPECTED RESULTS
The module should have a new option to configure.
preferred_group: yes, no
The object to configure is:
fvAEPg.attributes.prefGrMemb and the option is "include" or "exclude".
##### ACTUAL RESULTS
```
```
| null | https://github.com/ansible/ansible/pull/35265 | null | {'base_commit': 'b8e8fb48a84b65e805aecd263ebb7cd303e671ee', 'files': [{'path': 'lib/ansible/modules/network/aci/aci_epg.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [66, 86]}, "(None, 'main', 161)": {'add': [171, 185, 191, 230], 'mod': [196]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/ansible/modules/network/aci/aci_epg.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 4405b109e3abcd197666430708de2881e7cde8da | https://github.com/All-Hands-AI/OpenHands/issues/4280 | enhancement
good first issue
frontend
large effort | Update the frontend to use i18n keys | **What problem or use case are you trying to solve?**
The new UI hardcodes english text throughout the app. In order to support i18n, we should extend our i18n provider and replaced the hardcoded values with the new keys
**Describe the UX of the solution you'd like**
**Do you have thoughts on the technical implementation?**
**Describe alternatives you've considered**
**Additional context**
| null | https://github.com/All-Hands-AI/OpenHands/pull/4464 | null | {'base_commit': '4405b109e3abcd197666430708de2881e7cde8da', 'files': [{'path': 'frontend/src/components/form/custom-input.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 15], 'mod': [21]}}}, {'path': 'frontend/src/components/form/settings-form.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 13, 17, 37], 'mod': [10, 12, 15, 164, 174, 193, 223, 237, 244, 258, 294, 337, 348, 352, 359, 372, 373, 376, 380, 390, 391, 393, 395]}}}, {'path': 'frontend/src/components/modals/AccountSettingsModal.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 11, 25], 'mod': [89, 95, 125, 129]}}}, {'path': 'frontend/src/components/modals/ConnectToGitHubByTokenModal.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 9, 13], 'mod': [32, 33, 38]}}}, {'path': 'frontend/src/components/modals/LoadingProject.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 3, 30], 'mod': [34]}}}, {'path': 'frontend/src/components/modals/connect-to-github-modal.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 10, 18], 'mod': [27, 34, 58, 64]}}}, {'path': 'frontend/src/components/modals/security/Security.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 3, 19], 'mod': [24]}}}, {'path': 'frontend/src/components/modals/security/invariant/Invariant.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [126, 137, 146, 165, 167, 198, 200, 217, 219, 224, 267, 281, 284, 285, 292, 301, 307, 313]}}}, {'path': 'frontend/src/components/project-menu/project-menu-details-placeholder.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 2, 12], 'mod': [15]}}}, {'path': 'frontend/src/components/project-menu/project-menu-details.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 2, 14], 'mod': [35]}}}, {'path': 'frontend/src/i18n/translation.json', 'status': 'modified', 
'Loc': {'(None, None, None)': {'add': [1519], 'mod': [801]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "0",
"info_type": ""
} | {
"code": [
"frontend/src/components/modals/AccountSettingsModal.tsx",
"frontend/src/components/modals/security/Security.tsx",
"frontend/src/components/form/settings-form.tsx",
"frontend/src/components/modals/security/invariant/Invariant.tsx",
"frontend/src/components/modals/LoadingProject.tsx",
"frontend/src/i18n/translation.json",
"frontend/src/components/modals/ConnectToGitHubByTokenModal.tsx",
"frontend/src/components/project-menu/project-menu-details.tsx",
"frontend/src/components/form/custom-input.tsx",
"frontend/src/components/modals/connect-to-github-modal.tsx",
"frontend/src/components/project-menu/project-menu-details-placeholder.tsx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scikit-learn | scikit-learn | e8a15d544490b3fe80ef77dd995d12de84194d00 | https://github.com/scikit-learn/scikit-learn/issues/7435 | [RFC?] Make cross_val_score output a dict/named tuple. | Two major things here -
- Often I see that only a partial output of `_fit_and_score` is taken for use. It is wasteful to generate and discard arrays. It would rather be much better to generate only the stuff that is required.
- Now that we have more options, like @jnothman says [here](https://github.com/scikit-learn/scikit-learn/pull/7325#issuecomment-246529168) and [here](https://github.com/scikit-learn/scikit-learn/pull/7388#issuecomment-246233650) should we modify the output of `cross_val_score` (and also `_fit_and_score` to be a dict or a named tuple similar to the structure of `cv_results_`? (I think named-tuple is a better choice atleast for `_fit_and_score` as we stack the result of multiple `_fit_and_score` operations via `Parallel` mostly)
If we are changing the output of `cross_val_score`, this would be an ideal time to do it as we don't have to deprecate anything...
@jnothman @amueller @vene @GaelVaroquaux @agramfort
| null | https://github.com/scikit-learn/scikit-learn/pull/7388 | null | {'base_commit': 'e8a15d544490b3fe80ef77dd995d12de84194d00', 'files': [{'path': 'doc/modules/classes.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [225]}}}, {'path': 'doc/modules/cross_validation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [174], 'mod': [189]}}}, {'path': 'doc/modules/grid_search.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [86, 163]}}}, {'path': 'doc/modules/model_evaluation.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [212]}}}, {'path': 'doc/whats_new.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33]}}}, {'path': 'sklearn/metrics/scorer.py', 'status': 'modified', 'Loc': {"(None, 'get_scorer', 211)": {'add': [211, 217]}, "(None, 'check_scoring', 231)": {'mod': [256, 262, 275, 276, 277, 278, 280, 281, 282]}}}, {'path': 'sklearn/metrics/tests/test_score_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 13, 23]}, "('EstimatorWithoutFit', None, 106)": {'mod': [107]}, "('EstimatorWithFit', None, 111)": {'mod': [112]}, "('EstimatorWithFitAndScore', None, 117)": {'mod': [118]}, "('EstimatorWithFitAndPredict', None, 126)": {'mod': [127]}, "(None, 'test_check_scoring', 148)": {'mod': [148, 149, 153, 157, 165, 167, 171, 174, 175, 176]}}}, {'path': 'sklearn/model_selection/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [20, 52]}}}, {'path': 'sklearn/model_selection/_search.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 27, 36]}, "(None, 'fit_grid_point', 271)": {'add': [301], 'mod': [298, 299, 325, 326, 327, 328, 329, 330]}, "('BaseSearchCV', 'score', 402)": {'add': [421], 'mod': [426]}, "('BaseSearchCV', '_store', 615)": {'add': [617, 621]}, "('BaseSearchCV', None, 376)": {'add': [698], 'mod': [687, 688, 689, 690, 692, 693, 694, 695]}, "('GridSearchCV', None, 721)": {'add': [912, 924], 'mod': [750, 751, 
752, 753, 754, 804, 805, 806, 807, 860, 896, 897, 902, 905, 908, 921]}, "('RandomizedSearchCV', None, 973)": {'add': [1151, 1163], 'mod': [1015, 1016, 1017, 1018, 1019, 1069, 1070, 1071, 1072, 1132, 1135, 1136, 1141, 1144, 1147, 1160]}, "('BaseSearchCV', '_check_is_fitted', 428)": {'mod': [430, 431, 432, 433]}, "('BaseSearchCV', 'fit', 544)": {'mod': [578, 596, 597, 608, 611, 637, 638, 639, 640, 642, 643, 644, 645, 649, 650, 671, 672, 673, 676, 677, 678, 679, 681, 683, 684, 685]}, "('BaseSearchCV', 'grid_scores_', 698)": {'mod': [705]}}}, {'path': 'sklearn/model_selection/_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 301], 'mod': [6, 7, 27, 32, 33]}, "(None, 'cross_val_score', 36)": {'add': [124], 'mod': [49, 129, 131, 133, 134, 135, 136, 137, 138, 139, 140, 141]}, "(None, '_fit_and_score', 144)": {'add': [192, 225, 233], 'mod': [162, 163, 195, 196, 198, 199, 247, 248, 249, 260, 263, 266, 272]}, "(None, 'validation_curve', 906)": {'add': [1006]}, "(None, '_score', 283)": {'mod': [283, 284, 285, 286, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 298]}, "(None, 'permutation_test_score', 528)": {'mod': [558, 559, 560]}}}, {'path': 'sklearn/model_selection/tests/test_search.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 31, 36, 56, 930, 1036], 'mod': [30]}, "(None, 'test_unsupervised_grid_search', 635)": {'add': [644], 'mod': [639, 640, 641, 642, 643, 648]}, "(None, 'check_cv_results_array_types', 697)": {'add': [698], 'mod': [697, 706]}, "(None, 'test_random_search_cv_results', 792)": {'add': [812], 'mod': [793, 794, 795, 796, 798, 799, 800, 803, 804, 805, 806, 807, 808, 809, 810, 811, 825, 829]}, "(None, 'test_no_refit', 370)": {'mod': [373, 374, 375, 376, 377, 379, 380, 381, 382, 383, 384, 385]}, "(None, 'test_pandas_input', 610)": {'mod': [625, 626]}, "(None, 'check_cv_results_grid_scores_consistency', 717)": {'mod': [718, 719, 720, 721, 722, 723, 724, 725, 726, 727, 728, 729, 730, 731, 732, 733]}, 
"(None, 'test_grid_search_cv_results', 736)": {'mod': [744, 745, 746, 747, 748, 749, 763, 774, 777, 778]}, "(None, 'test_grid_search_cv_splits_consistency', 1258)": {'mod': [1275, 1276, 1277, 1278, 1279, 1287, 1288]}}}, {'path': 'sklearn/model_selection/tests/test_validation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 27, 44, 45, 58, 264]}, "(None, 'test_cross_val_score_score_func', 379)": {'add': [390], 'mod': [389]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/model_selection/_search.py",
"sklearn/metrics/scorer.py",
"sklearn/model_selection/_validation.py",
"sklearn/model_selection/__init__.py"
],
"doc": [
"doc/modules/classes.rst",
"doc/modules/model_evaluation.rst",
"doc/modules/cross_validation.rst",
"doc/whats_new.rst",
"doc/modules/grid_search.rst"
],
"test": [
"sklearn/model_selection/tests/test_validation.py",
"sklearn/metrics/tests/test_score_objects.py",
"sklearn/model_selection/tests/test_search.py"
],
"config": [],
"asset": []
} | 1 | |
huggingface | transformers | a73883ae9ec66cb35a8222f204a5f2fafc326d3f | https://github.com/huggingface/transformers/issues/24100 | [Trainer] Why not use `tqdm`'s `dynamic_ncols=True` option? | ### Feature request
# Problem
Tqdm progress bar is getting ugly when the width of the terminal is shrunk!

It progress bar makes the new line on every update! It is very ugly...
# Solution
Simply add the `dynamic_ncols=True` option to `tqdm`. It is located in `trainer_callbacks.ProgressCallback`.

You can check the progress bar is now dynamically resized when the terminal size is updated.
### Motivation
When I connect `tmux` session with different widths of the terminal, then the `tqdm` printing is getting ugly.
### Your contribution
Please check the PR #24101 | null | https://github.com/huggingface/transformers/pull/24101 | null | {'base_commit': 'a73883ae9ec66cb35a8222f204a5f2fafc326d3f', 'files': [{'path': 'src/transformers/trainer_callback.py', 'status': 'modified', 'Loc': {"('ProgressCallback', 'on_train_begin', 474)": {'mod': [476]}, "('ProgressCallback', 'on_prediction_step', 484)": {'mod': [487]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"src/transformers/trainer_callback.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
nvbn | thefuck | f0f49c1865162fd1eef9199ab895811846516ada | https://github.com/nvbn/thefuck/issues/422 | obsolete | Shell alias clobbers some history lines | I was trying this out, and found that large swathes of my history were missing after running "fuck" a single time. This should _not_ modify history except to insert a command it executes..
| null | https://github.com/nvbn/thefuck/pull/432 | null | {'base_commit': 'f0f49c1865162fd1eef9199ab895811846516ada', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [285, 309]}}}, {'path': 'thefuck/conf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19], 'mod': [27, 29]}, "('Settings', '_val_from_env', 120)": {'mod': [129]}}}, {'path': 'thefuck/types.py', 'status': 'modified', 'Loc': {"('CorrectedCommand', 'run', 273)": {'mod': [281]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"thefuck/types.py",
"thefuck/conf.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | bd8c293a97f7f08989cff1db0d9c32f5a2208b77 | https://github.com/scrapy/scrapy/issues/2518 | AttributeError: 'FeedExporter' object has no attribute 'slot' | I have this simple spider, when I call `scrapy crawl dataspider` it works fine and prints the item in the output :
import json
from scrapy.spiders import Spider
class dataspider(Spider):
name='dataspider'
start_urls=('https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL',)
def parse(self, response):
j=json.loads( response.body.decode('utf-8') )
yield j['matches'][1]
Outputs :
> {'t': 'AAPL', 'n': 'Apple Inc.', 'e': 'NASDAQ', 'id': '22144'}
However as soon as I try to save the item in a file using `scrapy crawl dataspider -o out.json` I get this error :
> AttributeError: 'FeedExporter' object has no attribute 'slot'
Full Traceback is :
```
$ scrapy crawl dataspider -o ./test.json
2017-01-30 14:32:06 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: googlefinance)
2017-01-30 14:32:06 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'googlefinance', 'CONCURRENT_REQUESTS': 100, 'CONCURRENT_REQUESTS_PER_DOMAIN': 100, 'DNS_TIMEOUT': 30, 'DOWNLOAD_TIMEOUT': 30, 'FEED_FORMAT': 'json', 'FEED_URI': './test.json', 'NEWSPIDER_MODULE': 'googlefinance.spiders', 'RETRY_HTTP_CODES': [500, 502, 503, 504, 400, 403, 404, 408], 'RETRY_TIMES': 30, 'SPIDER_MODULES': ['googlefinance.spiders'], 'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; FSL 7.0.6.01001)'}
2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.feedexport.FeedExporter',
'scrapy.extensions.logstats.LogStats']
2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-30 14:32:06 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-30 14:32:06 [scrapy.core.engine] INFO: Spider opened
2017-01-30 14:32:06 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.open_spider of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 187, in open_spider
uri = self.urifmt % self._get_uri_params(spider)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 262, in _get_uri_params
params[k] = getattr(spider, k)
File "/usr/lib/python3.6/site-packages/scrapy/spiders/__init__.py", line 36, in logger
logger = logging.getLogger(self.name)
File "/usr/lib/python3.6/logging/__init__.py", line 1813, in getLogger
return Logger.manager.getLogger(name)
File "/usr/lib/python3.6/logging/__init__.py", line 1167, in getLogger
raise TypeError('A logger name must be a string')
TypeError: A logger name must be a string
2017-01-30 14:32:06 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-30 14:32:06 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-30 14:32:07 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL> (referer: None)
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL', 'n': 'Apple Inc.', 'e': 'NASDAQ', 'id': '22144'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL', 'n': 'APPLE INC CEDEAR(REPR 1/10 SHR)', 'e': 'BCBA', 'id': '640373807586235'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL', 'n': 'Apple', 'e': 'SWX', 'id': '268194557752272'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AVSPY', 'n': 'NASDAQ OMX Alpha AAPL vs. SPY Index', 'e': 'INDEXNASDAQ', 'id': '3139928'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL34', 'n': 'APPLE DRN', 'e': 'BVMF', 'id': '486420404817650'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL', 'n': 'APPLE COMPUTER INC', 'e': 'BMV', 'id': '119565461895124'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL-EUR', 'n': 'Apple', 'e': 'SWX', 'id': '706336206708362'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.google.com/finance/match?matchtype=matchall&ei=UVlPWNmDEYm_U7SqgvAH&q=AAPL>
{'t': 'AAPL-USD', 'n': 'Apple', 'e': 'SWX', 'id': '1009743014824088'}
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.item_scraped of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 217, in item_scraped
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.core.engine] INFO: Closing spider (finished)
2017-01-30 14:32:07 [scrapy.utils.signal] ERROR: Error caught on signal handler: <bound method FeedExporter.close_spider of <scrapy.extensions.feedexport.FeedExporter object at 0x7ff68de97ef0>>
Traceback (most recent call last):
File "/usr/lib/python3.6/site-packages/twisted/internet/defer.py", line 150, in maybeDeferred
result = f(*args, **kw)
File "/usr/lib/python3.6/site-packages/pydispatch/robustapply.py", line 55, in robustApply
return receiver(*arguments, **named)
File "/usr/lib/python3.6/site-packages/scrapy/extensions/feedexport.py", line 198, in close_spider
slot = self.slot
AttributeError: 'FeedExporter' object has no attribute 'slot'
2017-01-30 14:32:07 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 309,
'downloader/request_count': 1,
'downloader/request_method_count/GET': 1,
'downloader/response_bytes': 761,
'downloader/response_count': 1,
'downloader/response_status_count/200': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2017, 1, 30, 13, 32, 7, 192220),
'item_scraped_count': 8,
'log_count/DEBUG': 10,
'log_count/ERROR': 10,
'log_count/INFO': 7,
'response_received_count': 1,
'scheduler/dequeued': 1,
'scheduler/dequeued/memory': 1,
'scheduler/enqueued': 1,
'scheduler/enqueued/memory': 1,
'start_time': datetime.datetime(2017, 1, 30, 13, 32, 6, 846350)}
2017-01-30 14:32:07 [scrapy.core.engine] INFO: Spider closed (finished)))
```
Any idea what the problem is ? | null | https://github.com/scrapy/scrapy/pull/2433 | null | {'base_commit': 'bd8c293a97f7f08989cff1db0d9c32f5a2208b77', 'files': [{'path': 'scrapy/spiderloader.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "('SpiderLoader', '_load_all_spiders', 26)": {'mod': [28, 29]}}}, {'path': 'tests/test_spiderloader/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "('SpiderLoaderTest', 'test_crawler_runner_loading', 82)": {'add': [91]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"tests/test_spiderloader/__init__.py",
"scrapy/spiderloader.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | bb385394b87e382a34db829bc7ed60d347af73c9 | https://github.com/scikit-learn/scikit-learn/issues/11194 | Build / CI
Blocker | NumPy dev causes test errors due to use of np.matrix | We are getting many warnings like `PendingDeprecationWarning('the matrix subclass is not the recommended way to represent matrices or deal with linear algebra (see https://docs.scipy.org/doc/numpy/user/numpy-for-matlab-users.html). Please adjust your code to use regular ndarray.` using numpy master (see logs at https://travis-ci.org/scikit-learn/scikit-learn/builds/387352026)
Apart from a very long log, this causes test failures where we have used `assert_no_warnings` (which we could now be importing from numpy instead of having our own implementation).
It might be a good idea to remove all uses of np.matrix that raise warnings. On the other hand, we might also consider that `assert_no_warnings` shouldn't be bothered by `PendingDeprecationWarning`s. | null | https://github.com/scikit-learn/scikit-learn/pull/11251 | null | {'base_commit': 'bb385394b87e382a34db829bc7ed60d347af73c9', 'files': [{'path': 'sklearn/ensemble/tests/test_iforest.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8], 'mod': [18]}, "(None, 'test_iforest_error', 91)": {'mod': [108, 109]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [
"sklearn/ensemble/tests/test_iforest.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 96da9525043f78aca4544d01761b13b2140e9ae6 | https://github.com/yt-dlp/yt-dlp/issues/9825 | good first issue
site-bug
patch-available | [cbc.ca] "unable to extract OpenGraph description" | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
United States
### Provide a description that is worded well enough to be understood
Video from CBC's site will not download, throws an error saying "unable to extract OpenGraph description", then says it's finished downloading the playlist (but downloaded no video files).
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.cbc.ca/player/play/video/1.3594815']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.04.28.232723 from yt-dlp/yt-dlp-nightly-builds [ac817bc83] (pip)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 2023-03-02-git-814178f926-full_build-www.gyan.dev (setts), ffprobe 2023-03-02-git-814178f926-full_build-www.gyan.dev, phantomjs 2.1.1
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2022.06.15, mutagen-1.46.0, requests-2.31.0, sqlite3-3.40.1, urllib3-2.2.1, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1810 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.04.28.232723 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.04.28.232723 from yt-dlp/yt-dlp-nightly-builds)
[generic] Extracting URL: https://www.cbc.ca/player/play/video/1.3594815
[generic] 1: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 1: Extracting information
[debug] Looking for embeds
[debug] Identified a twitter:player iframe
[cbc.ca] Extracting URL: https://www.cbc.ca/i/phoenix/player/syndicate/?autoPlay=true&sourceId=1.3594815
[cbc.ca] syndicate: Downloading webpage
WARNING: [cbc.ca] unable to extract OpenGraph description; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
[download] Downloading playlist: CBC Player
[cbc.ca] Playlist CBC Player: Downloading 0 items
[download] Finished downloading playlist: CBC Player
```
| null | https://github.com/yt-dlp/yt-dlp/pull/9866 | null | {'base_commit': '96da9525043f78aca4544d01761b13b2140e9ae6', 'files': [{'path': 'yt_dlp/extractor/cbc.py', 'status': 'modified', 'Loc': {"('CBCPlayerIE', None, 152)": {'add': [279], 'mod': [154]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/cbc.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | 77447e50c0b8143edcf34896af80dd58925582f9 | https://github.com/zylon-ai/private-gpt/issues/2 | TypeError: generate() got an unexpected keyword argument 'new_text_callback' | /privateGPT/gpt4all_j.py", line 152, in _call
text = self.client.generate(
TypeError: generate() got an unexpected keyword argument 'new_text_callback' | null | https://github.com/zylon-ai/private-gpt/pull/3 | null | {'base_commit': '77447e50c0b8143edcf34896af80dd58925582f9', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 | |
sherlock-project | sherlock | 9db8c213ffdad873380c9de41c142923ba0dc260 | https://github.com/sherlock-project/sherlock/issues/1366 | enhancement | Add xlsx Export | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Put x into all boxes (like this [x]) once you have completed what they say.
Make sure complete everything in the checklist.
-->
- [x] I'm reporting a feature request
- [x] I've checked for similar feature requests including closed ones
## Description
<!--
Provide a detailed description of the feature you would like Sherlock to have
-->
WRITE DESCRIPTION HERE
Add an option to export the result on xlsx file type.
| null | https://github.com/sherlock-project/sherlock/pull/1367 | null | {'base_commit': '9db8c213ffdad873380c9de41c142923ba0dc260', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [24]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}}}, {'path': 'sherlock/sherlock.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "(None, 'main', 477)": {'add': [508, 718]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sherlock/sherlock.py"
],
"doc": [],
"test": [],
"config": [
".gitignore",
"requirements.txt"
],
"asset": []
} | 1 |
geekan | MetaGPT | 8b209d4e17ad7dfc1ad7a80505eac42f71228734 | https://github.com/geekan/MetaGPT/issues/1539 | ollama llm vision api call error: async for raw_chunk in stream_resp: TypeError: 'async for' requires an object with __aiter__ method, got bytes | **Bug description**
when running any vision llm call (like example/llm_vision.py)
there seems to be an issue with async def _achat_completion_stream(self, messages: list[dict], timeout: int = USE_CONFIG_TIMEOUT) -> str: method
**Bug solved method**
No solve yet
**Environment information**
system metal, llm ollama, Python 3.10.13
- LLM type and model name: ollama, llava latest
- System version:
- Python version: Python 3.10.13
- MetaGPT version or branch: main
- packages version: /
- installation method: from source
**Screenshots or logs**
.....
do = self.iter(retry_state=retry_state)
return fut.result()
return self.__get_result()
raise self._exception
result = await fn(*args, **kwargs)
return await self._achat_completion_stream(messages, timeout=self.get_timeout(timeout))
async for raw_chunk in stream_resp: (HERE)
TypeError: 'async for' requires an object with __aiter__ method, got bytes
| null | https://github.com/geekan/MetaGPT/pull/1544 | null | {'base_commit': '8b209d4e17ad7dfc1ad7a80505eac42f71228734', 'files': [{'path': 'examples/llm_vision.py', 'status': 'modified', 'Loc': {"(None, 'main', 12)": {'mod': [18, 19]}}}, {'path': 'metagpt/configs/llm_config.py', 'status': 'modified', 'Loc': {"('LLMType', None, 18)": {'mod': [29]}, "('LLMConfig', 'check_llm_key', 101)": {'mod': [107]}}}, {'path': 'metagpt/provider/general_api_base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15]}, "('OpenAIResponse', None, 123)": {'mod': [124]}, "('APIRequestor', None, 227)": {'mod': [323, 324, 325, 326, 327, 328, 329, 330, 331, 332, 333, 334, 335, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347, 348, 349, 350, 352, 353, 354, 355, 356, 357, 358, 359, 360, 361, 362, 363, 364]}, "('APIRequestor', 'request_headers', 423)": {'mod': [442]}}}, {'path': 'metagpt/provider/general_api_requestor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6, 12]}, "(None, 'parse_stream_helper', 15)": {'mod': [15, 18, 23, 24]}, "('GeneralAPIRequestor', None, 38)": {'mod': [40, 53, 54, 55]}, "('GeneralAPIRequestor', '_interpret_response_line', 53)": {'mod': [57]}, "('GeneralAPIRequestor', '_interpret_response', 59)": {'mod': [61, 62, 66, 67, 68, 72]}, "('GeneralAPIRequestor', '_interpret_async_response', 80)": {'mod': [82, 87, 89, 90, 91, 94, 98, 101]}}}, {'path': 'metagpt/provider/ollama_api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 15], 'mod': [11]}, "('OllamaLLM', '__init__', 22)": {'add': [29], 'mod': [23, 26]}, "('OllamaLLM', '_achat_completion_stream', 76)": {'add': [90, 91, 92, 109], 'mod': [77, 78, 79, 80, 81, 82, 83, 84, 86, 87, 88, 89, 95, 96, 99]}, "('OllamaLLM', None, 17)": {'mod': [36, 37, 38, 40, 41, 42, 43, 44, 49, 50, 51]}, "('OllamaLLM', '_achat_completion', 53)": {'mod': [54, 55, 56, 57, 58, 59, 60, 63, 64, 65, 68, 69, 70, 71]}}}, {'path': 'tests/metagpt/provider/test_ollama_api.py', 'status': 
'modified', 'Loc': {"('Iterator', 'mock_ollama_arequest', 26)": {'add': [30], 'mod': [32, 33, 34, 36]}, '(None, None, None)': {'mod': [6, 10]}, "(None, 'mock_ollama_arequest', 23)": {'mod': [26, 40]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"examples/llm_vision.py",
"metagpt/configs/llm_config.py",
"metagpt/provider/general_api_requestor.py",
"metagpt/provider/ollama_api.py",
"metagpt/provider/general_api_base.py"
],
"doc": [],
"test": [
"tests/metagpt/provider/test_ollama_api.py"
],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | dbdd7996960ba46ed044a773290b02f17478c760 | https://github.com/3b1b/manim/issues/1065 | Example_scenes.py run problem question | I was able to get the install success thanks to help. I ran the example_scenes.py file and have the results below. I am now also going through https://talkingphysics.wordpress.com/2019/01/08/getting-started-animating-with-manim-and-python-3-7/ and have similar errors when running the first run python -m manim pymanim_tutorial_P37.py Shapes -pl. So I am trying to crawl before walking and would like to get through example_scenes and first tutorial .py run with success so any help is appreciated.
C:\Users\Admin\Desktop\manim-master>python ./manim.py example_scenes.py SquareTo
Circle -pl
Media will be written to ./media\. You can change this behavior with the --media
_dir flag.
[concat @ 0000000000375a40] Impossible to open 'CC:/Users/Admin/Desktop/manim-ma
ster/media/videos/example_scenes/480p15/partial_movie_files/SquareToCircle/00000
.mp4'
C:\Users\Admin\Desktop\manim-master\media\videos\example_scenes\480p15\partial_m
ovie_files\SquareToCircle\partial_movie_file_list.txt: Protocol not found
Did you mean file:C:\Users\Admin\Desktop\manim-master\media\videos\example_scene
s\480p15\partial_movie_files\SquareToCircle\partial_movie_file_list.txt?
File ready at C:\Users\Admin\Desktop\manim-master\media\videos\example_scenes\48
0p15\SquareToCircle.mp4
Played 3 animations
Traceback (most recent call last):
File "C:\Users\Admin\Desktop\manim-master\manimlib\extract_scene.py", line 156
, in main
open_file_if_needed(scene.file_writer, **config)
File "C:\Users\Admin\Desktop\manim-master\manimlib\extract_scene.py", line 35,
in open_file_if_needed
os.startfile(file_path)
FileNotFoundError: [WinError 2] The system cannot find the file specified: 'C:\\
Users\\Admin\\Desktop\\manim-master\\media\\videos\\example_scenes\\480p15\\Squa
reToCircle.mp4'
| null | https://github.com/3b1b/manim/pull/1057 | null | {'base_commit': 'dbdd7996960ba46ed044a773290b02f17478c760', 'files': [{'path': 'manimlib/scene/scene_file_writer.py', 'status': 'modified', 'Loc': {"('SceneFileWriter', 'combine_movie_files', 253)": {'mod': [289]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"manimlib/scene/scene_file_writer.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
yt-dlp | yt-dlp | c84aeac6b5695e7e1ac629d17fc51eb68ab91bae | https://github.com/yt-dlp/yt-dlp/issues/502 | external issue | [youtube] YouTube serving erroneous DASH Manifest VP9 formats | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.07.07. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running yt-dlp version **2021.07.07**
- [x] I've checked that all provided URLs are alive and playable in a browser (but with condition, see below)
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
```
$ yt-dlp V_h3Z40AAtw -F
[youtube] V_h3Z40AAtw: Downloading webpage
[youtube] V_h3Z40AAtw: Downloading MPD manifest
[info] Available formats for V_h3Z40AAtw:
ID EXT RESOLUTION FPS | FILESIZE TBR PROTO | VCODEC VBR ACODEC ABR ASR NOTE
--- ---- ---------- --- - --------- ----- ----- - ----------- ----- --------- ---- ------- ---------------------------------------
139 m4a audio only | 1.52MiB 50k dash | mp4a.40.5 50k 22050Hz DASH audio, m4a_dash, 22050Hz
140 m4a audio only | 4.02MiB 129k https | mp4a.40.2 129k 44100Hz audio_quality_medium, m4a_dash, 44100Hz
160 mp4 256x144 30 | 108k dash | avc1.4d400b 108k DASH video, mp4_dash
278 webm 256x144 30 | 95k dash | vp9 95k DASH video, webm_dash
133 mp4 426x240 30 | 242k dash | avc1.4d400c 242k DASH video, mp4_dash
242 webm 426x240 30 | 220k dash | vp9 220k DASH video, webm_dash
134 mp4 640x360 30 | 19.25MiB 620k https | avc1.4d401e 620k 360p, mp4_dash
18 mp4 640x360 30 | 22.68MiB 730k https | avc1.42001E 730k mp4a.40.2 0k 44100Hz 360p, 44100Hz
243 webm 640x360 30 | 405k dash | vp9 405k DASH video, webm_dash
135 mp4 854x480 30 | 1155k dash | avc1.4d400c 1155k DASH video, mp4_dash
244 webm 854x480 30 | 752k dash | vp9 752k DASH video, webm_dash
136 mp4 1280x720 30 | 69.87MiB 2251k https | avc1.4d401f 2251k 720p, mp4_dash
22 mp4 1280x720 30 | 2380k https | avc1.64001F 2380k mp4a.40.2 0k 44100Hz 720p, 44100Hz
247 webm 1280x720 30 | 1505k dash | vp9 1505k DASH video, webm_dash
248 webm 1920x1080 30 | 2646k dash | vp9 2646k DASH video, webm_dash
$ yt-dlp -v V_h3Z40AAtw
[debug] Command-line config: ['-v', 'V_h3Z40AAtw']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, pref UTF-8
[debug] yt-dlp version 2021.07.13.1626134551 (zip)
[debug] Python version 3.9.6 (CPython 64bit) - Linux-5.8.0-41-generic-x86_64-with-glibc2.32
[debug] exe versions: ffmpeg 4.3.1, ffprobe 4.3.1, rtmpdump 2.4
[debug] Proxy map: {}
[debug] [youtube] Extracting URL: V_h3Z40AAtw
[youtube] V_h3Z40AAtw: Downloading webpage
[youtube] [debug] Fetching webpage from https://www.youtube.com/watch?v=V_h3Z40AAtw&bpctr=9999999999&has_verified=1
[youtube] V_h3Z40AAtw: Downloading MPD manifest
[youtube] [debug] Fetching webpage from https://manifest.googlevideo.com/api/manifest/dash/expire/1626251414/ei/NkzuYLz-Ap-0s8IPlf2YyAo/ip/2001%3A19f0%3A7001%3A13a1%3A5400%3A3ff%3Afe11%3A205f/id/57f877678d0002dc/source/youtube/requiressl/yes/playback_host/r2---sn-oguelne7.googlevideo.com/mh/t1/mm/31%2C29/mn/sn-oguelne7%2Csn-oguesnzz/ms/au%2Crdu/mv/m/mvi/2/pl/55/tx/24027688/txs/24027687%2C24027688%2C24027689%2C24027690/hfr/all/as/fmp4_audio_clear%2Cwebm_audio_clear%2Cwebm2_audio_clear%2Cfmp4_sd_hd_clear%2Cwebm2_sd_hd_clear/initcwndbps/1267500/vprv/1/mt/1626229496/fvip/2/keepalive/yes/fexp/24001373%2C24007246/itag/0/sparams/expire%2Cei%2Cip%2Cid%2Csource%2Crequiressl%2Ctx%2Ctxs%2Chfr%2Cas%2Cvprv%2Citag/sig/AOq0QJ8wRAIgel_8rJx7O1ChqaQTDiBI5cysHmZ_4uCmgCWN_kPxy8cCIDfgAFVDYl5WO7a1gLifSDw6vBfjELblxSgkOodOm1am/lsparams/playback_host%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps/lsig/AG3C_xAwRAIgJVqT1xhW2KIhXVIj6cJRomDQ7-UOvq8yyC_J5r7ksfMCIBG0VgIJSIlNe49rl6ty6WA_DuH2AhKJvLOpq8fUBojv
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] V_h3Z40AAtw: Downloading 1 format(s): 248+140
[debug] locking youtube_V_h3Z40AAtw.lock
[debug] Invoking downloader on "https://manifest.googlevideo.com/api/manifest/dash/expire/1626251414/ei/NkzuYLz-Ap-0s8IPlf2YyAo/ip/2001%3A19f0%3A7001%3A13a1%3A5400%3A3ff%3Afe11%3A205f/id/57f877678d0002dc/source/youtube/requiressl/yes/playback_host/r2---sn-oguelne7.googlevideo.com/mh/t1/mm/31%2C29/mn/sn-oguelne7%2Csn-oguesnzz/ms/au%2Crdu/mv/m/mvi/2/pl/55/tx/24027688/txs/24027687%2C24027688%2C24027689%2C24027690/hfr/all/as/fmp4_audio_clear%2Cwebm_audio_clear%2Cwebm2_audio_clear%2Cfmp4_sd_hd_clear%2Cwebm2_sd_hd_clear/initcwndbps/1267500/vprv/1/mt/1626229496/fvip/2/keepalive/yes/fexp/24001373%2C24007246/itag/0/sparams/expire%2Cei%2Cip%2Cid%2Csource%2Crequiressl%2Ctx%2Ctxs%2Chfr%2Cas%2Cvprv%2Citag/sig/AOq0QJ8wRAIgel_8rJx7O1ChqaQTDiBI5cysHmZ_4uCmgCWN_kPxy8cCIDfgAFVDYl5WO7a1gLifSDw6vBfjELblxSgkOodOm1am/lsparams/playback_host%2Cmh%2Cmm%2Cmn%2Cms%2Cmv%2Cmvi%2Cpl%2Cinitcwndbps/lsig/AG3C_xAwRAIgJVqT1xhW2KIhXVIj6cJRomDQ7-UOvq8yyC_J5r7ksfMCIBG0VgIJSIlNe49rl6ty6WA_DuH2AhKJvLOpq8fUBojv"
[dashsegments] Total fragments: 50
[download] Destination: Sweet Candy ②-V_h3Z40AAtw.f248.webm
[download] Got server HTTP error: HTTP Error 404: Not Found. Retrying fragment 1 (attempt 1 of 10) ...
^C[debug] unlocking youtube_V_h3Z40AAtw.lock
ERROR: Interrupted by user
```
<!--
Do not remove the above ```
-->
## Description
[link to video](https://youtu.be/V_h3Z40AAtw)
The video itself plays on browser, and doesn't have 1080p as you can see it.
But yt-dlp (and youtube-dl) reports 1080p format, which possibly doesn't exist on the server. (format `248` on the video fails to download all segments.)
Resolutions shown in webpage here:

__Edit:__ Tested web and android clients, some locations (JP, Vultr JP, OCI US?), with cookies or not, but all of them has this "ghosty" format | null | https://github.com/yt-dlp/yt-dlp/pull/536 | null | {'base_commit': 'c84aeac6b5695e7e1ac629d17fc51eb68ab91bae', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1340, 1341]}}}, {'path': 'yt_dlp/downloader/youtube_live_chat.py', 'status': 'modified', 'Loc': {"('YoutubeLiveChatFD', 'download_and_parse_fragment', 111)": {'mod': [119]}, "('YoutubeLiveChatFD', 'real_download', 22)": {'mod': [149, 158, 186]}}}, {'path': 'yt_dlp/extractor/youtube.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [35, 39, 42], 'mod': [31, 34, 38]}, "('YoutubeBaseInfoExtractor', None, 68)": {'add': [394, 404], 'mod': [423, 424, 486, 522, 530, 531, 532]}, "('YoutubeIE', None, 756)": {'add': [1617, 1659], 'mod': [1125, 1290, 1297, 1298, 1299, 1655, 1661, 1862, 1863, 1864, 1865, 2290, 2291, 2292, 2294, 2296, 2297, 2298, 2299, 2301, 2302, 2303, 2304, 2306, 2307, 2308, 2309, 2310, 2311, 2312, 2313, 2314, 2315]}, "('YoutubeIE', '_extract_player_url', 1693)": {'add': [1698], 'mod': [1695]}, "('YoutubeIE', '_get_video_info_params', 2271)": {'add': [2279]}, "('YoutubeIE', '_real_extract', 2290)": {'add': [2574, 2600, 2611, 2642, 2829], 'mod': [2317, 2318, 2320, 2321, 2322, 2323, 2324, 2325, 2326, 2327, 2329, 2330, 2331, 2332, 2333, 2334, 2335, 2336, 2337, 2339, 2340, 2341, 2342, 2343, 2345, 2346, 2347, 2348, 2349, 2350, 2352, 2353, 2354, 2355, 2356, 2357, 2358, 2359, 2360, 2361, 2362, 2363, 2364, 2365, 2367, 2368, 2369, 2370, 2371, 2372, 2373, 2374, 2376, 2377, 2378, 2379, 2380, 2381, 2382, 2383, 2384, 2385, 2386, 2387, 2388, 2389, 2390, 2391, 2392, 2393, 2394, 2395, 2396, 2397, 2398, 2400, 2401, 2402, 2403, 2404, 2405, 2406, 2407, 2408, 2409, 2410, 2411, 2412, 2413, 2414, 2415, 2417, 2418, 2419, 2420, 2421, 2422, 2423, 2424, 2425, 2426, 2427, 2429, 2430, 2432, 2433, 2434, 2435, 2436, 2437, 
2438, 2440, 2441, 2442, 2444, 2445, 2446, 2447, 2448, 2449, 2450, 2451, 2452, 2454, 2455, 2456, 2457, 2458, 2459, 2460, 2461, 2462, 2463, 2464, 2465, 2466, 2467, 2468, 2470, 2471, 2472, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481, 2482, 2483, 2484, 2485, 2486, 2487, 2488, 2489, 2490, 2491, 2492, 2493, 2494, 2496, 2498, 2507, 2508, 2509, 2510, 2511, 2557, 2588, 2591, 2594, 2603, 2619, 2622, 2625, 2626, 2627, 2628, 2629, 2630, 2632, 2634, 2639, 2645, 2663, 2664, 2665, 2666, 2667, 2668, 2669, 2670, 2671, 2672, 2673, 2676, 2677, 2678, 2679, 2680, 2681, 2682, 2683, 2684, 2685, 2686, 2687, 2688, 2689, 2690, 2691, 2692, 2728, 2730, 2734, 2737, 2738, 2740, 2742, 2749, 2750, 2753, 2754, 2755, 2832, 2946, 2947, 2979, 2980, 2981, 2989, 2990, 2993, 2994, 3007, 3009]}, "('YoutubePlaylistIE', None, 4145)": {'add': [4167, 4195], 'mod': [4190]}, "('YoutubeSearchURLIE', None, 4379)": {'add': [4387]}, "('YoutubeBaseInfoExtractor', '_call_api', 470)": {'mod': [476]}, "('YoutubeBaseInfoExtractor', '_extract_identity_token', 493)": {'mod': [494]}, "('YoutubeBaseInfoExtractor', '_generate_api_headers', 530)": {'mod': [535, 536, 541]}, "('YoutubeIE', '_comment_entries', 2040)": {'mod': [2125]}, "('YoutubeTabIE', None, 3014)": {'mod': [3290]}, "('YoutubeTabIE', '_entries', 3639)": {'mod': [3696]}, "('YoutubeTabIE', '_extract_from_tabs', 3779)": {'mod': [3846]}, "('YoutubeTabIE', '_extract_mix_playlist', 3854)": {'mod': [3856, 3857]}, "('YoutubeTabIE', '_reload_with_unavailable_videos', 3950)": {'mod': [3974, 3975]}, "('YoutubeTabIE', '_extract_webpage', 3989)": {'mod': [4002]}, "('YoutubeSearchIE', '_get_n_results', 4367)": {'mod': [4369]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"yt_dlp/extractor/youtube.py",
"yt_dlp/downloader/youtube_live_chat.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | 59ca996eb1b510cef7ae60a179c36ea7f353f71e | https://github.com/deepfakes/faceswap/issues/197 | Face on angles >180 degrees not recognized on extraction | Hi guys thanks for the amazing work here!
I have been following the landmark detection dialogue #187 and have tried both hog and cnn with both face-alignment and face_recognition. I got face-alignment with cnn working great, with pytorch on win10 now. However, I noticed that none of the above are able to reliably able to identify faces where the face is pointing downwards, for example with the forehead pointing from 6 to 9 o'clock.
I think all these algorithms tend to look for eyes being above the level of the mouth.
For example [Image Removed) this image would not be detected and extracted by hog or cnn in face-alignment or face_recognition.
However by rotating it 90 deg to the right, so that the forehead is pointing up, makes it extracted.
Would it be possible to have an argument set to resend the image for alignment but rotated if it was not caught the first time?
I am ok with python and novice with git but could maybe even give it a try if someone points me to where the frame is passed for extraction.
Thanks! | null | https://github.com/deepfakes/faceswap/pull/253 | null | {'base_commit': '59ca996eb1b510cef7ae60a179c36ea7f353f71e', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('DirectoryProcessor', 'get_faces_alignments', 140)": {'add': [144]}, '(None, None, None)': {'mod': [9]}, "('DirectoryProcessor', None, 29)": {'mod': [157]}, "('DirectoryProcessor', 'get_faces', 157)": {'mod': [159]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {"('DetectedFace', '__init__', 10)": {'add': [11]}, "(None, 'detect_faces', 3)": {'mod': [3, 7]}, "('DetectedFace', None, 9)": {'mod': [10]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 32]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}, "('ConvertImage', 'convert', 216)": {'mod': [229, 230]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {"('ExtractTrainingData', 'add_optional_arguments', 22)": {'add': [68]}, "('ExtractTrainingData', None, 12)": {'add': [100]}, "('ExtractTrainingData', 'handleImage', 101)": {'add': [104, 117], 'mod': [102, 106, 107]}, '(None, None, None)': {'mod': [7]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"lib/utils.py",
"lib/faces_detect.py",
"lib/cli.py",
"scripts/convert.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | 9438672b1cf80602fc93536670d9601d655377f5 | https://github.com/deepfakes/faceswap/issues/239 | EOL Error when training | After pulling the latest commit today I am now getting the below error when trying to train.
**Command**
python faceswap.py train -A "D:\Fakes\Data\Dataset_A\Faces" -B "D:\Fakes\Data\Dataset_B\Faces" -m "D:\Fakes\Model" -p -s 100 -bs 80 -t LowMem
**Error**
Traceback (most recent call last):
File "faceswap.py", line 12, in <module>
from scripts.convert import ConvertImage
File "D:\Fakes\faceswap\scripts\convert.py", line 100
help="Erosion kernel size. (Masked converter only). Positive values apply erosion which reduces the edge \
^
SyntaxError: EOL while scanning string literal
| null | null | https://github.com/deepfakes/faceswap/commit/9438672b1cf80602fc93536670d9601d655377f5 | {'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "commit",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"scripts/convert.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | null | |
huggingface | transformers | 5f7a07c0c867abedbb3ebf135915eeee56add24b | https://github.com/huggingface/transformers/issues/9326 | Issue with 'char_to_token()' function of DistilBertTokenizerFast | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 4.0.1
- Platform: Google Colab
- Python version: 3.8
- PyTorch version (GPU?):
- Tensorflow version (GPU?): 2.4.0
- Using GPU in script?: No
- Using distributed or parallel set-up in script?: NA
### Who can help: **tokenizers: @mfuntowicz**
## Information
Model I am using DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased') to tokenize Squad 2.0 train and validate dataset.
The problem arises when using below code snippet to add_token_positions (start and end position) as below from https://huggingface.co/transformers/custom_datasets.html:
_def add_token_positions(encodings, answers):
start_positions = []
end_positions = []
for i in range(len(answers)):
start_positions.append(**encodings.char_to_token(i, answers[i]['answer_start'])**)
end_positions.append(**encodings.char_to_token(i, answers[i]['answer_end'] - 1**))
# if None, the answer passage has been truncated
if start_positions[-1] is None:
start_positions[-1] = tokenizer.model_max_length
if end_positions[-1] is None:
end_positions[-1] = tokenizer.model_max_length
encodings.update({'start_positions': start_positions, 'end_positions': end_positions})
add_token_positions(train_encodings, train_answers)
add_token_positions(val_encodings, val_answers)_
The tasks I am working on is:
*Training model on SQUaD 2.0 using code given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0
## To reproduce
Steps to reproduce the behavior:
1. Follow the steps given on https://huggingface.co/transformers/custom_datasets.html#question-answering-with-squad-2-0 and then verify start and end position outcome using below code snippet in Expected behavior
<!-- If you have code snippets, error messages, stack traces please provide them here as well.
Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
Do not use screenshots, as they are hard to read and (more importantly) don't allow others to copy-and-paste your code.-->
## Expected behavior:
- Start and End position are being defined using above code snippet which will be provided as training/validation data to model but end position is not derived as correct value due to some issue with char_to_token() function which is used to find out end position.
- Please find below snippet for verification that answer using start and end position after tokenization is not matching with actual answer.
- So the training data which is being fed to model after tokenization is incorrect
idx=8
print(f'Actual context: {train_contexts[idx]}')
print(f'Actual question: {train_questions[idx]}')
print(f"Actual answer: {train_answers[idx]['text']}")
start_position=train_encodings.char_to_token(idx,train_answers[idx]['answer_start'])
end_position =train_encodings.char_to_token(idx,train_answers[idx]['answer_end'])
print(f"Answer after tokenization: {tokenizer.convert_ids_to_tokens(train_encodings['input_ids'][idx][start_position:end_position])}")
OUTPUT:
**Actual context:** Beyoncé Giselle Knowles-Carter (/biːˈjɒnseɪ/ bee-YON-say) (born September 4, 1981) is an American singer, songwriter, record producer and actress. Born and raised in Houston, Texas, she performed in various singing and dancing competitions as a child, and rose to fame in the late 1990s as lead singer of R&B girl-group Destiny's Child. Managed by her father, Mathew Knowles, the group became one of the world's best-selling girl groups of all time. Their hiatus saw the release of Beyoncé's debut album, Dangerously in Love (2003), which established her as a solo artist worldwide, earned five Grammy Awards and featured the Billboard Hot 100 number-one singles "Crazy in Love" and "Baby Boy".
**Actual question:** When did Beyoncé rise to fame?
**Actual answer:** late 1990s
**Answer after tokenization:** ['late', '1990s', 'as', 'lead', 'singer', 'of', 'r', '&', 'b', 'girl', '-', 'group', 'destiny', "'", 's', 'child', '.', 'managed', 'by', 'her', 'father', ',', 'mathew', 'knowles', ',', 'the', 'group', 'became', 'one', 'of', 'the', 'world', "'", 's', 'best', '-', 'selling', 'girl', 'groups', 'of', 'all', 'time', '.', 'their', 'hiatus', 'saw', 'the', 'release', 'of', 'beyonce', "'", 's', 'debut', 'album', ',', 'dangerously', 'in', 'love', '(', '2003', ')', ',', 'which', 'established', 'her', 'as', 'a', 'solo', 'artist', 'worldwide', ',', 'earned', 'five', 'grammy', 'awards', 'and', 'featured', 'the', 'billboard', 'hot', '100', 'number', '-', 'one', 'singles', '"', 'crazy', 'in', 'love', '"', 'and', '"', 'baby', 'boy', '"', '.', '[SEP]', 'when', 'did', 'beyonce', 'rise', 'to', 'fame', '?', '[SEP]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', 
'[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]', '[PAD]'] | null | https://github.com/huggingface/transformers/pull/9378 | null | {'base_commit': 
'5f7a07c0c867abedbb3ebf135915eeee56add24b', 'files': [{'path': 'docs/source/custom_datasets.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [564], 'mod': [561, 562, 566]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/source/custom_datasets.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
zylon-ai | private-gpt | 2940f987c0996fe083d1777bdc117fc28c576c08 | https://github.com/zylon-ai/private-gpt/issues/1007 | bug
primordial | Running ingest throws AttributeError: module 'chromadb' has no attribute 'PersistentClient' | ```
(privategpt-py3.11) (base) ➜ privateGPT git:(main) ✗ python ingest.py
Traceback (most recent call last):
File "/Volumes/Projects/privateGPT/ingest.py", line 169, in <module>
main()
File "/Volumes/Projects/privateGPT/ingest.py", line 146, in main
chroma_client = chromadb.PersistentClient(settings=CHROMA_SETTINGS , path=persist_directory)
^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: module 'chromadb' has no attribute 'PersistentClient'
```
.env file:
```
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=all-MiniLM-L6-v2
MODEL_N_CTX=1000
MODEL_N_BATCH=8
TARGET_SOURCE_CHUNKS=4
```
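The error above usually means the installed chromadb predates `PersistentClient`. A hedged sketch of a fail-fast guard, assuming `PersistentClient` arrived in chromadb 0.4 (worth verifying against the changelog); a stand-in namespace simulates the old module so the sketch does not require chromadb itself:

```python
# Sketch: fail fast with a clear message when the installed chromadb
# predates PersistentClient (assumed to have been added in chromadb 0.4).
from types import SimpleNamespace

def make_client(chromadb, path):
    if not hasattr(chromadb, "PersistentClient"):
        version = getattr(chromadb, "__version__", "unknown")
        raise RuntimeError(
            f"chromadb {version} has no PersistentClient; "
            "upgrade with: pip install -U chromadb"
        )
    return chromadb.PersistentClient(path=path)

# Simulated old module (stand-in for `import chromadb` on an old install):
old_chromadb = SimpleNamespace(__version__="0.3.26")
try:
    make_client(old_chromadb, "db")
except RuntimeError as e:
    print(e)  # prints: chromadb 0.3.26 has no PersistentClient; upgrade with: pip install -U chromadb
```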
**Environment (please complete the following information):**
- OS / hardware: macOS 13.5.1
- Python version 3.11.5
any idea what's wrong here or how to solve it? | null | https://github.com/zylon-ai/private-gpt/pull/1015 | null | {'base_commit': '2940f987c0996fe083d1777bdc117fc28c576c08', 'files': [{'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11, 12, 13, 14, 15, 16, 18, 19, 23]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
geekan | MetaGPT | bbb9645f7c60c35177922d10ccc7ed4b90d261c3 | https://github.com/geekan/MetaGPT/issues/979 | `utils.text.reduce_message_length` Not reducing text length | **Bug description**
I came across the following issue.
```python
File "/Users/azure/Documents/Workspace/Datasci/lib/python3.10/site-packages/metagpt/utils/text.py", line 31, in reduce_message_length
raise RuntimeError("fail to reduce message length")
RuntimeError: fail to reduce message length
```
**Bug solved method**
Digging into the code, it appears `utils.text.reduce_message_length()` only checks whether the message is already short enough.
If it is too long, it simply raises an exception instead of shortening it.
Following is the code of `utils.text.reduce_message_length()`:
```python
def reduce_message_length(
msgs: Generator[str, None, None],
model_name: str,
system_text: str,
reserved: int = 0,
) -> str:
max_token = TOKEN_MAX.get(model_name, 2048) - count_string_tokens(system_text, model_name) - reserved
for msg in msgs:
if count_string_tokens(msg, model_name) < max_token or model_name not in TOKEN_MAX:
return msg
raise RuntimeError("fail to reduce message length")
```
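For reference, the linked PR (#986) reworked `token_counter.py` rather than adding truncation; below is only a hedged sketch of what a truncating fallback could look like, with a stand-in word-count tokenizer in place of the real per-model token counting:

```python
# Sketch of a truncating fallback: return the first short-enough message,
# otherwise hard-truncate the last candidate instead of raising.

def count_tokens(text: str) -> int:
    return len(text.split())  # stand-in: one token per whitespace word

def reduce_message_length(msgs, max_token: int) -> str:
    last = None
    for msg in msgs:
        if count_tokens(msg) < max_token:
            return msg
        last = msg
    if last is None:
        raise RuntimeError("no messages given")
    # Fallback: truncate rather than fail outright.
    return " ".join(last.split()[: max_token - 1])

assert reduce_message_length(iter(["one two three four five six"]), max_token=4) == "one two three"
```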
- LLM type and model name:
- System version:MetaGPT 0.7.4
- Python version: Python 3.10.13
Is this a feature that is not implemented yet, or can I try to create a PR to fix it?
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"metagpt/actions/research.py",
"metagpt/config2.py",
"metagpt/utils/token_counter.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scikit-learn | scikit-learn | c56bce482db698c7c7e7b583b8b2e08a211eb48b | https://github.com/scikit-learn/scikit-learn/issues/10463 | API | Toward a consistent API for NearestNeighbors & co | ### Estimators relying on `NearestNeighbors` (NN), and their related params:
`params = (algorithm, leaf_size, metric, p, metric_params, n_jobs)`
**sklearn.neighbors:**
- `NearestNeighbors(n_neighbors, radius, *params)`
- `KNeighborsClassifier(n_neighbors, *params)`
- `KNeighborsRegressor(n_neighbors, *params)`
- `RadiusNeighborsClassifier(radius, *params)`
- `RadiusNeighborsRegressor(radius, *params)`
- `LocalOutlierFactor(n_neighbors, *params)`
- ~`KernelDensity(algorithm, metric, leaf_size, metric_params)`
**sklearn.manifold:**
- `TSNE(method="barnes_hut", metric)`
- `Isomap(n_neighbors, neighbors_algorithm, n_jobs)`
- `LocallyLinearEmbedding(n_neighbors, neighbors_algorithm, n_jobs)`
- `SpectralEmbedding(affinity='nearest_neighbors', n_neighbors, n_jobs)`
**sklearn.cluster:**
- `SpectralClustering(affinity='nearest_neighbors', n_neighbors, n_jobs)`
- `DBSCAN(eps, *params)`
### How do they call `NearestNeighbors` ?
- Inherit from `NeighborsBase._fit`: NearestNeighbors, KNeighborsClassifier, KNeighborsRegressor, RadiusNeighborsClassifier, RadiusNeighborsRegressor, LocalOutlierFactor
- Call `BallTree/KDTree(X)`: KernelDensity
- Call `kneighbors_graph(X)`: SpectralClustering, SpectralEmbedding
- Call `NearestNeighbors().fit(X)`: TSNE, DBSCAN, Isomap, kneighbors_graph
### Do they handle other form of input X?
- Handle precomputed distances matrix, with (metric/affinity='precomputed'): TSNE, DBSCAN, SpectralEmbedding, SpectralClustering
- Handle `KNeighborsMixin` object: kneighbors_graph
- Handle `NeighborsBase` object: all estimators inheriting NeighborsBase + UnsupervisedMixin
- Handle `BallTree/KDTree` object: all estimators inheriting NeighborsBase + SupervisedFloatMixin/SupervisedIntegerMixin
___
### Issues:
1. We don't have all NN parameters in all classes (e.g. `n_jobs` in TSNE).
2. We can't give a custom NN estimators to these classes. (PR #3922 #8999)
3. The handling of input X as a `NearestNeighbors/BallTree/KDTree` object is inconsistent and poorly documented. Sometimes it is documented but does not work (e.g. Isomap), or works without being well documented (e.g. LocalOutlierFactor). Most classes almost handle it, since `NearestNeighbors().fit(NearestNeighbors().fit(X))` works, but a call to `check_array(X)` prevents it (e.g. Isomap, DBSCAN, SpectralEmbedding, SpectralClustering, LocallyLinearEmbedding, TSNE).
4. The handling of X as a precomputed distance matrix is inconsistent and sometimes does not work with sparse matrices (as produced by `kneighbors_graph`) (e.g. TSNE #9691).
### Proposed solutions:
A. We could generalize the use of precomputed distance matrices, and use pipelines to chain `NearestNeighbors` with other estimators. Yet this might not be possible or efficient for some estimators; in that case one would have to adapt the estimators to allow for the following: `Estimator(neighbors='precomputed').fit(distance_matrix, y)`
B. We could improve the checking of X to more widely allow X to be a fitted `NearestNeighbors/BallTree/KDTree` object. The changes would probably be limited; however, this solution is not pipeline-friendly.
C. To be pipeline-friendly, a custom `NearestNeighbors` object could be passed in the params, unfitted. We could then put all NN-related parameters in this estimator parameter, and allow custom estimators with a clear API. This is essentially what is proposed in #8999. | null | https://github.com/scikit-learn/scikit-learn/pull/10482 | null | {'base_commit': 'c56bce482db698c7c7e7b583b8b2e08a211eb48b', 'files': [{'path': 'doc/glossary.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [699]}}}, {'path': 'doc/modules/classes.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1236, 1239]}}}, {'path': 'doc/modules/neighbors.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [511]}}}, {'path': 'doc/whats_new/v0.22.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [71, 316, 399]}}}, {'path': 'sklearn/cluster/dbscan_.py', 'status': 'modified', 'Loc': {"(None, 'dbscan', 23)": {'mod': [54, 55]}, "('DBSCAN', None, 147)": {'mod': [175, 176]}, "('DBSCAN', 'fit', 284)": {'mod': [322, 323, 331, 332, 333, 334, 336, 337, 338, 339, 340, 341, 342, 343, 344, 345, 346, 347]}}}, {'path': 'sklearn/cluster/spectral.py', 'status': 'modified', 'Loc': {"('SpectralClustering', 'fit', 448)": {'add': [481], 'mod': [471]}, '(None, None, None)': {'mod': [16]}, "('SpectralClustering', None, 275)": {'mod': [329, 330, 331, 332]}, "('SpectralClustering', '_pairwise', 532)": {'mod': [533]}}}, {'path': 'sklearn/cluster/tests/test_dbscan.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [97]}}}, {'path': 'sklearn/cluster/tests/test_spectral.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19, 104]}}}, {'path': 'sklearn/manifold/_utils.pyx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [16, 17, 23, 27, 28, 30, 31, 32, 33, 49, 64, 65, 67, 68, 88, 97]}}}, {'path': 'sklearn/manifold/isomap.py', 'status': 'modified', 'Loc': {"('Isomap', None, 15)": {'add': [66, 140], 'mod': [61, 76, 77]}, "('Isomap', 
'__init__', 105)": {'add': [115], 'mod': [107]}, "('Isomap', '_fit_transform', 117)": {'add': [120, 130], 'mod': [118, 123]}, '(None, None, None)': {'mod': [9]}, "('Isomap', 'fit', 165)": {'mod': [170, 172]}, "('Isomap', 'fit_transform', 184)": {'mod': [189]}, "('Isomap', 'transform', 202)": {'mod': [215, 219, 221, 224, 225, 228, 229]}}}, {'path': 'sklearn/manifold/locally_linear.py', 'status': 'modified', 'Loc': {"(None, 'barycenter_kneighbors_graph', 67)": {'mod': [102]}}}, {'path': 'sklearn/manifold/spectral_embedding_.py', 'status': 'modified', 'Loc': {"('SpectralEmbedding', '_get_affinity_matrix', 458)": {'add': [479]}, '(None, None, None)': {'mod': [22]}, "(None, 'spectral_embedding', 135)": {'mod': [160]}, "('SpectralEmbedding', None, 353)": {'mod': [372, 373, 374]}, "('SpectralEmbedding', '_pairwise', 455)": {'mod': [456]}, "('SpectralEmbedding', 'fit', 505)": {'mod': [510, 515, 525, 529, 530]}, "('SpectralEmbedding', 'fit_transform', 545)": {'mod': [550, 555]}}}, {'path': 'sklearn/manifold/t_sne.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21], 'mod': [14, 17]}, "('TSNE', '_fit', 640)": {'add': [666], 'mod': [641, 643, 644, 645, 646, 648, 649, 650, 651, 652, 653, 654, 655, 656, 658, 659, 660, 661, 662, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686, 687, 688, 689, 733, 737, 740, 743, 753, 754, 757, 758, 769, 772, 773]}, "(None, '_joint_probabilities', 31)": {'mod': [56]}, "(None, '_joint_probabilities_nn', 63)": {'mod': [63, 73, 74, 76, 77, 93, 94, 95, 97, 102, 103]}, "('TSNE', 'fit_transform', 864)": {'mod': [872]}, "('TSNE', 'fit', 885)": {'mod': [894]}}}, {'path': 'sklearn/manifold/tests/test_isomap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 116]}}}, {'path': 'sklearn/manifold/tests/test_spectral_embedding.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, "(None, 'test_spectral_embedding_precomputed_affinity', 128)": {'mod': [128, 136, 137]}, "(None, 
'test_spectral_embedding_callable_affinity', 143)": {'mod': [143, 155, 156]}}}, {'path': 'sklearn/manifold/tests/test_t_sne.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10, 321], 'mod': [9]}, "(None, 'test_binary_search', 104)": {'mod': [107, 108, 109, 110, 112, 113]}, "(None, 'test_binary_search_neighbors', 120)": {'mod': [127, 128, 129, 130, 131, 132, 135, 136, 137, 138, 139, 140, 141, 142, 143, 145, 146, 148, 149, 150, 151, 152, 153, 154]}, "(None, 'test_binary_perplexity_stability', 162)": {'mod': [166, 169, 170, 171, 172, 174, 175, 177, 178, 179]}, "(None, 'test_fit_csr_matrix', 265)": {'mod': [265, 272]}, "(None, 'test_non_square_precomputed_distances', 316)": {'mod': [316, 317, 319, 320]}, "(None, 'test_non_positive_precomputed_distances', 323)": {'mod': [323, 324, 325, 326, 327, 328, 329]}, "(None, 'test_no_sparse_on_barnes_hut', 566)": {'mod': [566, 567, 568, 569, 570, 571, 572, 573, 574]}, "(None, 'test_barnes_hut_angle', 609)": {'mod': [619, 620, 621, 622, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637]}}}, {'path': 'sklearn/neighbors/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 23, 27]}}}, {'path': 'sklearn/neighbors/base.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [105], 'mod': [29]}, "('NeighborsBase', '_fit', 164)": {'add': [194, 200, 206, 235, 239], 'mod': [209]}, "('KNeighborsMixin', 'kneighbors', 339)": {'add': [429, 483], 'mod': [345, 346, 360, 364, 409, 417, 418, 422, 424, 425, 428, 435, 436, 438, 459, 467, 468, 469, 470, 471, 474, 480, 482, 494, 497, 498, 499]}, "('KNeighborsMixin', 'kneighbors_graph', 502)": {'add': [564, 575], 'mod': [508, 509, 525, 550, 551, 552, 553, 554, 555, 557, 558, 559, 563, 577]}, "('RadiusNeighborsMixin', 'radius_neighbors_graph', 787)": {'add': [808], 'mod': [795, 811, 832, 833, 835, 846, 853, 862]}, "(None, '_tree_query_parallel_helper', 292)": {'mod': [292, 298]}, "(None, '_tree_query_radius_parallel_helper', 582)": {'mod': [582, 588]}, 
"('RadiusNeighborsMixin', None, 591)": {'mod': [628, 787]}, "('RadiusNeighborsMixin', 'radius_neighbors', 628)": {'mod': [650, 654, 659, 698, 706, 718, 723, 724, 727, 728, 729, 732, 734, 753, 754, 758, 759, 761, 772, 781, 784]}}}, {'path': 'sklearn/neighbors/classification.py', 'status': 'modified', 'Loc': {"('KNeighborsClassifier', None, 26)": {'add': [76]}, "('RadiusNeighborsClassifier', None, 252)": {'add': [305]}, "('KNeighborsClassifier', 'predict', 155)": {'mod': [160, 161, 166, 179, 182]}, "('KNeighborsClassifier', 'predict_proba', 197)": {'mod': [202, 203, 208, 223, 233]}, "('RadiusNeighborsClassifier', 'predict', 446)": {'mod': [451, 452, 457, 469, 470, 471]}, "('RadiusNeighborsClassifier', 'predict_proba', 489)": {'mod': [494, 495, 500, 507, 510, 538]}}}, {'path': 'sklearn/neighbors/graph.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 7, 8]}, "(None, 'radius_neighbors_graph', 108)": {'add': [184], 'mod': [146, 148, 149, 159, 183]}, "(None, '_query_include_self', 24)": {'mod': [24, 26, 27, 28, 29, 31]}, "(None, 'kneighbors_graph', 34)": {'mod': [68, 70, 71, 81, 104]}}}, {'path': 'sklearn/neighbors/lof.py', 'status': 'modified', 'Loc': {"('LocalOutlierFactor', None, 19)": {'mod': [63, 64, 121]}, "('LocalOutlierFactor', 'fit', 219)": {'mod': [242, 250, 251]}, "('LocalOutlierFactor', '_predict', 299)": {'mod': [323]}, "('LocalOutlierFactor', '_local_reachability_density', 470)": {'mod': [478, 482, 488]}}}, {'path': 'sklearn/neighbors/regression.py', 'status': 'modified', 'Loc': {"('KNeighborsRegressor', None, 24)": {'add': [80]}, "('RadiusNeighborsRegressor', None, 194)": {'add': [251]}, '(None, None, None)': {'mod': [16]}, "('KNeighborsRegressor', 'predict', 149)": {'mod': [154, 155, 160, 163, 164, 165, 166, 167]}, "('RadiusNeighborsRegressor', 'predict', 313)": {'mod': [318, 319, 324]}}}, {'path': 'sklearn/neighbors/tests/test_neighbors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 10, 11, 16, 190, 823], 'mod': 
[7]}, "(None, 'test_k_and_radius_neighbors_duplicates', 1297)": {'add': [1320]}, "(None, 'test_radius_neighbors_predict_proba', 1485)": {'add': [1500]}, "(None, 'test_precomputed', 136)": {'mod': [136, 139, 142, 143, 144, 178, 179, 180, 181, 182]}, "(None, 'test_kneighbors_regressor_sparse', 824)": {'mod': [849, 850, 851, 852]}}}, {'path': 'sklearn/neighbors/unsupervised.py', 'status': 'modified', 'Loc': {"('NearestNeighbors', None, 9)": {'mod': [43, 44, 46, 47, 48, 49, 50, 52, 54, 56, 57, 59, 60, 61, 62, 63, 65, 66]}}}, {'path': 'sklearn/utils/estimator_checks.py', 'status': 'modified', 'Loc': {}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"sklearn/neighbors/unsupervised.py",
"sklearn/neighbors/regression.py",
"sklearn/manifold/t_sne.py",
"sklearn/neighbors/__init__.py",
"sklearn/neighbors/base.py",
"sklearn/manifold/locally_linear.py",
"sklearn/manifold/_utils.pyx",
"sklearn/cluster/dbscan_.py",
"sklearn/manifold/spectral_embedding_.py",
"sklearn/cluster/spectral.py",
"sklearn/manifold/isomap.py",
"sklearn/neighbors/lof.py",
"sklearn/neighbors/classification.py",
"sklearn/neighbors/graph.py",
"sklearn/utils/estimator_checks.py"
],
"doc": [
"doc/modules/neighbors.rst",
"doc/glossary.rst",
"doc/modules/classes.rst",
"doc/whats_new/v0.22.rst"
],
"test": [
"sklearn/manifold/tests/test_spectral_embedding.py",
"sklearn/neighbors/tests/test_neighbors.py",
"sklearn/cluster/tests/test_spectral.py",
"sklearn/cluster/tests/test_dbscan.py",
"sklearn/manifold/tests/test_isomap.py",
"sklearn/manifold/tests/test_t_sne.py"
],
"config": [],
"asset": []
} | 1 |
python | cpython | 55d50d147c953fab37b273bca9ab010f40e067d3 | https://github.com/python/cpython/issues/102500 | type-feature
topic-typing
3.12 | Implement PEP 688: Making the buffer protocol accessible in Python | PEP-688 has just been accepted. I will use this issue to track its implementation in CPython.
<!-- gh-linked-prs -->
### Linked PRs
* gh-102521
* gh-102571
* gh-104174
* gh-104281
* gh-104288
* gh-104317
<!-- /gh-linked-prs -->
| null | https://github.com/python/cpython/pull/102521 | null | {'base_commit': '55d50d147c953fab37b273bca9ab010f40e067d3', 'files': [{'path': 'Include/internal/pycore_global_objects_fini_generated.h', 'status': 'modified', 'Loc': {"(None, '_PyStaticObjects_CheckRefcnt', 24)": {'add': [595, 694, 1124]}}}, {'path': 'Include/internal/pycore_global_strings.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [83, 182, 612]}}}, {'path': 'Include/internal/pycore_runtime_init_generated.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [589, 688, 1118]}}}, {'path': 'Include/internal/pycore_typeobject.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [140]}}}, {'path': 'Include/internal/pycore_unicodeobject_generated.h', 'status': 'modified', 'Loc': {"(None, '_PyUnicode_InitStaticStrings', 12)": {'add': [98, 395, 1685]}}}, {'path': 'Include/pybuffer.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [107]}}}, {'path': 'Lib/_collections_abc.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [441], 'mod': [52]}}}, {'path': 'Lib/inspect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [45, 3314]}}}, {'path': 'Lib/test/test_buffer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19, 4440]}}}, {'path': 'Lib/test/test_collections.py', 'status': 'modified', 'Loc': {"('TestCollectionABCs', None, 1416)": {'add': [1951]}, '(None, None, None)': {'mod': [28]}}}, {'path': 'Lib/test/test_doctest.py', 'status': 'modified', 'Loc': {"('test_DocTestFinder', 'non_Python_modules', 700)": {'mod': [710]}}}, {'path': 'Modules/Setup.stdlib.in', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [172]}}}, {'path': 'Modules/_testcapi/parts.h', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [40]}}}, {'path': 'Modules/_testcapimodule.c', 'status': 'modified', 'Loc': {"(None, 'PyInit__testcapi', 4162)": {'add': [4312]}}}, {'path': 'Objects/clinic/memoryobject.c.h', 'status': 
'modified', 'Loc': {'(None, None, None)': {'add': [64], 'mod': [359]}}}, {'path': 'Objects/memoryobject.c', 'status': 'modified', 'Loc': {'(None, None, 783)': {'add': [807], 'mod': [795]}, '(None, None, None)': {'add': [970, 3186], 'mod': [780]}, "(None, '_PyManagedBuffer_FromObject', 88)": {'mod': [88]}, '(None, None, 87)': {'mod': [96]}, "(None, 'PyMemoryView_FromObject', 784)": {'mod': [784]}, '(None, None, 838)': {'mod': [854]}}}, {'path': 'Objects/object.c', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16, 2075]}}}, {'path': 'Objects/typeobject.c', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 8061, 8897, 8964, 8983, 9064]}, '(None, None, 9203)': {'mod': [9211, 9212]}}}, {'path': 'PCbuild/_testcapi.vcxproj', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [112]}}}, {'path': 'PCbuild/_testcapi.vcxproj.filters', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}, {'path': 'Tools/build/generate_global_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [123]}}}, {'path': 'Tools/c-analyzer/cpython/globals-to-fix.tsv', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [88]}}}, {'path': 'Tools/c-analyzer/cpython/ignored.tsv', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [406]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": "",
"info_type": ""
} | {
"code": [
"Objects/typeobject.c",
"Lib/inspect.py",
"Tools/build/generate_global_objects.py",
"Include/internal/pycore_global_objects_fini_generated.h",
"Tools/c-analyzer/cpython/ignored.tsv",
"Objects/clinic/memoryobject.c.h",
"Lib/_collections_abc.py",
"Tools/c-analyzer/cpython/globals-to-fix.tsv",
"Include/internal/pycore_typeobject.h",
"Include/internal/pycore_runtime_init_generated.h",
"Modules/_testcapimodule.c",
"Objects/memoryobject.c",
"Modules/_testcapi/parts.h",
"Include/pybuffer.h",
"Include/internal/pycore_global_strings.h",
"Include/internal/pycore_unicodeobject_generated.h",
"Objects/object.c"
],
"doc": [],
"test": [
"Lib/test/test_doctest.py",
"Lib/test/test_buffer.py",
"Lib/test/test_collections.py"
],
"config": [],
"asset": [
"PCbuild/_testcapi.vcxproj.filters",
"PCbuild/_testcapi.vcxproj",
"Modules/Setup.stdlib.in"
]
} | 1 |