organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
scrapy | scrapy | 794ab19660d369f273abdd5b93721c209f6e4eab | https://github.com/scrapy/scrapy/issues/4528 | enhancement | Fail or warn if from_crawler() returns None | ## Summary
Generate a warning or error if from_crawler() for a middleware/extension/etc. returns None
## Motivation
I created a custom extension and connected signals in the from_crawler() classmethod, but neglected to return the new extension instance. Scrapy still reported the extension under "Enabled extensions", but none of the signals worked, since the instance was immediately garbage collected and its signals were silently disconnected.
This was of course an error on my part, but it would have saved me a lot of debugging if I had gotten a warning that from_crawler() was returning None, or if the extension were removed from the "Enabled extensions" list.
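A minimal stdlib-only illustration (hypothetical `Extension` class and `registry` attribute, not Scrapy code) of why the instance silently disappears when `from_crawler()` forgets to return it — signal connections typically hold only weak references:

```python
import gc
import weakref

class Extension:
    registry = []  # stand-in for a signal manager that holds weak references

    @classmethod
    def from_crawler(cls):
        ext = cls()
        # nothing else keeps `ext` alive once this method returns
        Extension.registry.append(weakref.ref(ext))
        # BUG: missing `return ext`

result = Extension.from_crawler()
gc.collect()
print(result)                   # None
print(Extension.registry[0]())  # None: the instance was already collected
```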
Would it be appropriate for utils.misc.create_instance() to raise an error or generate a warning if it's about to return None? Or should MiddlewareManager treat create_instance() returning None the same as create_instance() raising NotConfigured? | null | https://github.com/scrapy/scrapy/pull/4532 | null | {'base_commit': '794ab19660d369f273abdd5b93721c209f6e4eab', 'files': [{'path': 'scrapy/utils/misc.py', 'status': 'modified', 'Loc': {"(None, 'create_instance', 128)": {'add': [139], 'mod': [146, 148, 150]}}}, {'path': 'tests/test_utils_misc/__init__.py', 'status': 'modified', 'Loc': {"('UtilsMiscTestCase', None, 13)": {'add': [133]}, "('UtilsMiscTestCase', 'test_create_instance', 80)": {'mod': [117, 118, 126]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/utils/misc.py",
"tests/test_utils_misc/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | c9d7386a32aeb4bc7fe9654d194651eee1ede56c | https://github.com/scrapy/scrapy/issues/908 | Current caching of DjangoItem's instance property is buggy | The current implementation of `DjangoItem`'s `instance` property should be rewritten so that further modifications of the item are reflected in the underlying Django model instance.
These are the steps to reproduce the issue:
create a django item:
`item = MyDjangoItem()`
set a field value:
`item['foo'] = 1`
save the item:
`model = item.save(commit=False)`
item.instance is now cached and further modifications to the item are not reflected...
this returns "1"... and it's OK, because it has been previously assigned to the instance
`print model.foo`
now... set a new item field
`item['bar'] = 2`
this prints None (because the underlying cache has not been updated!!)
`print item.instance.bar`
In conclusion... the cache should be purged each time the item is updated!
Currently I overrode DjangoItem in this way to avoid the issue:
```python
from scrapy.contrib.djangoitem import DjangoItem as BaseDjangoItem

class DjangoItem(BaseDjangoItem):
    def __setitem__(self, key, value):
        self._instance = None
        return super(DjangoItem, self).__setitem__(key, value)

    def __delitem__(self, key):
        self._instance = None
        super(DjangoItem, self).__delitem__(key)
```
| null | https://github.com/scrapy/scrapy/pull/1065 | null | {'base_commit': 'c9d7386a32aeb4bc7fe9654d194651eee1ede56c', 'files': [{'path': 'scrapy/contrib/djangoitem.py', 'status': 'modified', 'Loc': {"('DjangoItem', '__init__', 30)": {'add': [33], 'mod': [31]}}}, {'path': 'tests/test_djangoitem/__init__.py', 'status': 'modified', 'Loc': {"('DjangoItemTest', 'test_default_field_values', 100)": {'add': [103]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"tests/test_djangoitem/__init__.py",
"scrapy/contrib/djangoitem.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 776129a9513e2b6ab6f7e8cda1dd3de66cbbff44 | https://github.com/scrapy/scrapy/issues/2452 | bug | Image Background converting to green. | Hi,
The problem is that I'm downloading images with the crawler, but they are transparent. So the image looks like this.

But it should look like this.
##

| null | https://github.com/scrapy/scrapy/pull/2675 | null | {'base_commit': '776129a9513e2b6ab6f7e8cda1dd3de66cbbff44', 'files': [{'path': 'scrapy/pipelines/images.py', 'status': 'modified', 'Loc': {"('ImagesPipeline', 'convert_image', 130)": {'add': [134]}}}, {'path': 'tests/test_pipeline_images.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [98]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/pipelines/images.py"
],
"doc": [],
"test": [
"tests/test_pipeline_images.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | d874c4d90bcf96c7e5b507babaa2a45a233da506 | https://github.com/scrapy/scrapy/issues/4124 | bug
docs | Fix wrong fact in JOBDIR documentation about requests needing to be pickle-serializable | The documentation about using `JOBDIR` says that requests need to be serializable with `pickle`.
But, thanks to feedback from @kmike, now I know that their callback and errback methods do not need to be `pickle`-serializable as long as they are spider methods.
The documentation should be clear about this.
Related to #4125. | null | https://github.com/scrapy/scrapy/pull/4139 | null | {'base_commit': 'd874c4d90bcf96c7e5b507babaa2a45a233da506', 'files': [{'path': 'docs/topics/jobs.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74, 75, 77, 78, 80, 82, 83, 84, 85, 87, 88, 90, 92, 93, 94, 95, 97, 98, 104]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/topics/jobs.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 62a4ede5e995f83abd5a90f7dd6ac242f2f3870d | https://github.com/scrapy/scrapy/issues/4250 | enhancement | Batch deliveries for long running crawlers | ## Summary
Add a new setting `FEED_STORAGE_BATCH` that will deliver a file whenever `item_scraped_count` reaches a multiple of that number.
## Motivation
For long running jobs (say we are consuming inputs from a working queue) we may want partial results instead of waiting for a long batch to finish.
## Describe alternatives you've considered
Of course we can stop and restart a spider every now and then.
However, a simpler approach is to have it running as long as required, but delivering partial results.
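The counting logic behind such batch delivery can be sketched as follows (illustrative names and API, not Scrapy's actual feed exporter):

```python
import io

class BatchWriter:
    """Deliver (close and hand off) the current file every `batch_size` items."""

    def __init__(self, batch_size, open_file):
        self.batch_size = batch_size
        self.open_file = open_file  # callable: batch_id -> file-like object
        self.item_count = 0
        self.batch_id = 0
        self.delivered = []  # batch ids already handed off
        self.file = open_file(self.batch_id)

    def write_item(self, serialized):
        self.file.write(serialized)
        self.item_count += 1
        if self.batch_size and self.item_count % self.batch_size == 0:
            self._deliver()

    def _deliver(self):
        self.file.close()
        self.delivered.append(self.batch_id)
        self.batch_id += 1
        self.file = self.open_file(self.batch_id)

# five items with a batch size of 2 -> batches 0 and 1 delivered, item 5 pending
writer = BatchWriter(2, lambda batch_id: io.StringIO())
for i in range(5):
    writer.write_item('{"n": %d}\n' % i)
print(writer.delivered)  # [0, 1]
```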
| null | https://github.com/scrapy/scrapy/pull/4434 | null | {'base_commit': '62a4ede5e995f83abd5a90f7dd6ac242f2f3870d', 'files': [{'path': 'docs/topics/feed-exports.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [243, 294, 448]}}}, {'path': 'scrapy/extensions/feedexport.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('_FeedSlot', '__init__', 209)": {'add': [216], 'mod': [214]}, "('FeedExporter', '__init__', 242)": {'add': [272]}, "('FeedExporter', None, 232)": {'add': [324, 325, 345], 'mod': [371]}, "('FeedExporter', 'item_scraped', 325)": {'add': [329]}, "('_FeedSlot', None, 208)": {'mod': [209]}, "('FeedExporter', 'open_spider', 276)": {'mod': [278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, 290, 291]}, "('FeedExporter', 'close_spider', 293)": {'mod': [296, 297, 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 309, 310, 311, 312, 313, 314, 315, 316, 317, 318, 319, 320, 321]}, "('FeedExporter', '_get_uri_params', 371)": {'mod': [375, 376]}}}, {'path': 'scrapy/settings/default_settings.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [149]}}}, {'path': 'scrapy/utils/conf.py', 'status': 'modified', 'Loc': {"(None, 'feed_complete_default_values_from_settings', 116)": {'add': [117]}}}, {'path': 'tests/test_feedexport.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 26, 29]}, "('FeedExportTest', None, 505)": {'add': [511], 'mod': [505, 518, 519, 520, 521, 562, 563, 564, 565, 566, 567, 568, 570, 571, 572, 574, 575, 577, 578, 579, 580, 581, 582, 583, 585, 586, 588, 589, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 696, 697, 698, 699, 700, 701, 702, 703]}, "('FeedExportTest', 'test_multiple_feeds_failing_logs_blocking_feed_storage', 1147)": {'add': [1165]}, "('FeedExportTest', 'test_export_no_items_not_store_empty', 720)": {'mod': [728]}, "('FeedExportTest', 'test_export_no_items_store_empty', 731)": {'mod': [748]}}}, {'path': 'tests/test_utils_conf.py', 'status': 
'modified', 'Loc': {"('FeedExportConfigTestCase', 'test_feed_complete_default_values_from_settings_empty', 144)": {'add': [151, 159]}, "('FeedExportConfigTestCase', 'test_feed_complete_default_values_from_settings_non_empty', 162)": {'add': [171, 179]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/utils/conf.py",
"scrapy/extensions/feedexport.py",
"scrapy/settings/default_settings.py"
],
"doc": [
"docs/topics/feed-exports.rst"
],
"test": [
"tests/test_utils_conf.py",
"tests/test_feedexport.py"
],
"config": [
"tox.ini"
],
"asset": []
} | 1 |
scrapy | scrapy | 016c7e92d1d2893e7a8ce61c7f2e76818e71d019 | https://github.com/scrapy/scrapy/issues/5161 | Feeds Enhancement: Item Filters | ## Summary
Currently there are no convenient ways to filter items before they can be exported. An ```ItemChecker``` class can be used to filter items while also providing flexibility to the user.
## Motivation/Proposal
Scrapy currently doesn't have any convenient APIs to customize conditions for item exports. An ```ItemChecker``` class can be used by the user to define constraints for acceptable items for particular feeds.
The ```ItemChecker``` class can have 3 main public methods ```accepts```, ```accepts_class``` and ```accepts_fields```. Scrapy will mainly use the ```accepts``` method to decide if an item is acceptable; ```accepts_class``` and ```accepts_fields``` will have certain default behaviors which can be overridden by the user should they want to customize them.
```python
class ItemChecker:
    """
    This will be used by FeedExporter to decide if an item should be allowed
    to be exported to a particular feed.

    :param feed_options: FEEDS dictionary passed from FeedExporter
    :type feed_options: dict
    """

    accepted_items = []  # list of Items the user wants to accept

    def __init__(self, feed_options):
        # populate accepted_items with item_classes values from feed_options, if present
        self.accepted_items = list(feed_options.get('item_classes', ()))

    def accepts(self, item):
        """
        Main method to be used by FeedExporter to check if the item is acceptable
        according to defined constraints. This method uses the accepts_class and
        accepts_fields methods to decide if the item is acceptable.

        :param item: scraped item which the user wants to check is acceptable
        :type item: scrapy supported items (dictionaries, Item objects, dataclass objects, and attrs objects)
        :return: `True` if accepted, `False` otherwise
        :rtype: bool
        """

    def accepts_class(self, item):
        """
        Method to check if the item is an instance of a class declared in the
        accepted_items list. Can be overridden by the user if they want to allow
        only certain item classes.

        Default behaviour: if accepted_items is empty then all items will be
        accepted, else only items present in accepted_items will be accepted.

        :param item: scraped item
        :type item: scrapy supported items (dictionaries, Item objects, dataclass objects, and attrs objects)
        :return: `True` if item in accepted_items, `False` otherwise
        :rtype: bool
        """

    def accepts_fields(self, fields):
        """
        Method to check if certain fields of the item pass the filtering
        criteria. Users can override this method to add their own custom
        filters.

        Default behaviour: accepts all fields.

        :param fields: all the fields of the scraped item
        :type fields: dict
        :return: `True` if all the fields pass the filtering criteria, else `False`
        :rtype: bool
        """
```
Such custom filters can be declared in ```settings.py```. For convenience Items can also be declared here without needing to create a custom ```ItemChecker``` class.
```python
from myproject.filterfile import MyFilter1
from myproject.items import MyItem1, MyItem2

FEEDS = {
    'items1.json': {
        'format': 'json',
        'item_filter': MyFilter1,
    },
    'items2.xml': {
        'format': 'xml',
        'item_classes': (MyItem1, MyItem2),
    },
}
```
## Describe alternatives you've considered
This feature builds upon #4576.
## Additional context
This feature proposal is part of a GSoC project (see #4963). This issue has been created to get inputs from the Scrapy community to refine the proposed feature.
| null | https://github.com/scrapy/scrapy/pull/5178 | null | {'base_commit': '016c7e92d1d2893e7a8ce61c7f2e76818e71d019', 'files': [{'path': 'docs/topics/feed-exports.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [270, 313, 322, 328, 349]}}}, {'path': 'scrapy/extensions/feedexport.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [48]}, "('_FeedSlot', '__init__', 218)": {'add': [227]}, "('FeedExporter', '__init__', 253)": {'add': [257, 271, 277]}, "('FeedExporter', '_start_new_batch', 342)": {'add': [370]}, "('FeedExporter', 'item_scraped', 376)": {'add': [378]}, "('FeedExporter', '_get_uri_params', 478)": {'add': [488]}, "('_FeedSlot', None, 217)": {'mod': [218]}}}, {'path': 'tests/test_feedexport.py', 'status': 'modified', 'Loc': {"('FeedExportTestBase', None, 556)": {'add': [563]}, "('FeedExportTest', None, 632)": {'add': [931]}, "('FeedExportTest', 'test_export_multiple_item_classes', 889)": {'mod': [891, 892, 893, 897]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/extensions/feedexport.py"
],
"doc": [
"docs/topics/feed-exports.rst"
],
"test": [
"tests/test_feedexport.py"
],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 1445ebd2294cd3d1d8886649fec969bfe78979ad | https://github.com/scrapy/scrapy/issues/5655 | Add twine check to CI | It'd be nice to do https://twine.readthedocs.io/en/stable/#twine-check on CI, to ensure our changes don't break pypi rendering. | null | https://github.com/scrapy/scrapy/pull/5656 | null | {'base_commit': '1445ebd2294cd3d1d8886649fec969bfe78979ad', 'files': [{'path': '.github/workflows/checks.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [73]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"tox.ini",
".github/workflows/checks.yml"
],
"asset": []
} | 1 | |
scrapy | scrapy | 3a4d57b3f52633d77291e51f31353fd317034d8c | https://github.com/scrapy/scrapy/issues/1223 | FeedExporters export empty dicts when FEED_EXPORT_FIELDS setting is not set | Reported by several users using S3 exporter and JsonItemExporter.
It looks like the docs for [FEED_EXPORT_FIELDS](http://doc.scrapy.org/en/master/topics/feed-exports.html?#std:setting-FEED_EXPORT_FIELDS) do not match current behaviour:
> When omitted, Scrapy uses fields defined in Item subclasses a spider is yielding. If raw dicts are used as items Scrapy tries to infer field names from the exported data - currently it uses field names from the first item.
```python
if self.fields_to_export is None:
    if include_empty and not isinstance(item, dict):
        field_iter = six.iterkeys(item.fields)
    else:
        field_iter = six.iterkeys(item)
```
https://github.com/scrapy/scrapy/blob/master/scrapy/exporters/__init__.py#L59
The following line, which fetches the FEED_EXPORT_FIELDS setting, returns an empty list `[]` when the setting is absent, and not `None` as one would expect (a bug in `settings.getlist()` IMO)
```python
self.export_fields = settings.getlist('FEED_EXPORT_FIELDS')
```
https://github.com/scrapy/scrapy/blob/master/scrapy/extensions/feedexport.py#L154
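A small sketch of this surprising behaviour (a simplified stand-in for `settings.getlist()`, not the real implementation) and the normalization that fixes it:

```python
def getlist(settings, name, default=None):
    # simplified: an absent setting yields [], not None, so downstream
    # `fields_to_export is None` checks never see None
    value = settings.get(name, default or [])
    if isinstance(value, str):
        value = value.split(',')
    return list(value)

settings = {}
export_fields = getlist(settings, 'FEED_EXPORT_FIELDS')
print(export_fields)  # [] rather than the expected None

# fix direction: normalize the empty list back to None
export_fields = export_fields or None
print(export_fields)  # None
```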
| null | https://github.com/scrapy/scrapy/pull/1224 | null | {'base_commit': '3a4d57b3f52633d77291e51f31353fd317034d8c', 'files': [{'path': 'docs/topics/feed-exports.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [238], 'mod': [244, 245, 246]}}}, {'path': 'scrapy/extensions/feedexport.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, "('FeedExporter', '__init__', 142)": {'mod': [155]}}}, {'path': 'tests/test_feedexport.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "('FeedExportTest', None, 120)": {'mod': [183, 197, 233, 247]}, "('FeedExportTest', 'test_export_csv_items', 183)": {'mod': [194]}, "('FeedExportTest', 'test_export_csv_multiple_item_classes', 197)": {'mod': [210, 212, 218, 220, 229, 230]}, "('FeedExportTest', 'test_export_csv_dicts', 233)": {'mod': [235, 240, 244]}, "('FeedExportTest', 'test_export_csv_feed_export_fields', 247)": {'mod': [263, 264, 272, 273]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/extensions/feedexport.py"
],
"doc": [
"docs/topics/feed-exports.rst"
],
"test": [
"tests/test_feedexport.py"
],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | e1f66620ec7341c55f3eb7f44088224b5f68c1ad | https://github.com/scrapy/scrapy/issues/5855 | good first issue
CI | test_batch_path_differ sometimes fails | See https://github.com/scrapy/scrapy/pull/5847#issuecomment-1471778039. | null | https://github.com/scrapy/scrapy/pull/5898 | null | {'base_commit': 'e1f66620ec7341c55f3eb7f44088224b5f68c1ad', 'files': [{'path': 'tests/test_feedexport.py', 'status': 'modified', 'Loc': {"('BatchDeliveriesTest', 'test_batch_path_differ', 2542)": {'mod': [2545, 2555]}, "('BatchDeliveriesTest', 'test_s3_export', 2587)": {'mod': [2618]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [
"tests/test_feedexport.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 0f4b70f5821b4db2882ad4f01d340f62bbb01bf7 | https://github.com/scrapy/scrapy/issues/10 | enhancement | Add support for FTP downloads | We should add support for following FTP links like:
ftp://www.example.com/somedir/somefile.xml
I suppose Requests will only use the URL attribute (and perhaps some data in meta, if it's needed).
As for Responses, they will contain the file contents in the body, as one would expect.
There should be a flag to enable/disable passive FTP, perhaps even per spider.
| null | https://github.com/scrapy/scrapy/pull/329 | null | {'base_commit': '0f4b70f5821b4db2882ad4f01d340f62bbb01bf7', 'files': [{'path': 'scrapy/settings/default_settings.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [58]}}}, {'path': 'scrapy/tests/test_downloader_handlers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 18]}, "('S3TestCase', 'test_request_signing6', 309)": {'add': [328]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/settings/default_settings.py"
],
"doc": [],
"test": [
"scrapy/tests/test_downloader_handlers.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | c7654f7cb1081f0937f84c1b2ed272318c9c2c6c | https://github.com/scrapy/scrapy/issues/2435 | enhancement | Exposing downloader stats to custom scheduler | In order to get maximum fetching performance, the queue has to be carefully metered. To do this, the custom scheduler needs to know:
- the type of the key in downloader (ip or hostname),
- count of requests to specific hostname/ip in the queue,
- delay/concurrency parameters of hostname/ip,
- list of all hostname/ips in the queue.
Current Scheduler API is designed for storage and resume-from-disk purposes, so I think it's time to re-think it taking into account efficiency of fetching. The most common problem with inefficient fetching is a queue filled with a single domain and polite crawling requirement. | null | https://github.com/scrapy/scrapy/pull/3393 | null | {'base_commit': 'c7654f7cb1081f0937f84c1b2ed272318c9c2c6c', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}}}, {'path': 'docs/topics/signals.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [281]}}}, {'path': 'scrapy/core/downloader/__init__.py', 'status': 'modified', 'Loc': {"('Downloader', '_enqueue_request', 123)": {'add': [131]}}}, {'path': 'scrapy/signals.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [15]}}}, {'path': 'tests/test_engine.py', 'status': 'modified', 'Loc': {"('CrawlerRun', '__init__', 101)": {'add': [105]}, "('CrawlerRun', 'run', 111)": {'add': [126]}, "('CrawlerRun', None, 98)": {'add': [157]}, "('EngineTest', '_assert_scheduled_requests', 202)": {'add': [214]}, "('EngineTest', '_assert_downloaded_responses', 219)": {'add': [221]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/core/downloader/__init__.py",
"scrapy/signals.py"
],
"doc": [
"docs/topics/signals.rst"
],
"test": [
"tests/test_engine.py"
],
"config": [
".gitignore"
],
"asset": []
} | 1 |
scrapy | scrapy | 794ab19660d369f273abdd5b93721c209f6e4eab | https://github.com/scrapy/scrapy/issues/4556 | enhancement
good first issue
docs | Cover chompjs in the documentation | We cover js2xml in the documentation. However, the library can be rather slow. For use cases where https://github.com/Nykakin/chompjs may be used instead, it should be encouraged. | null | https://github.com/scrapy/scrapy/pull/4562 | null | {'base_commit': '794ab19660d369f273abdd5b93721c209f6e4eab', 'files': [{'path': 'docs/topics/dynamic-content.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [186, 243]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/topics/dynamic-content.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 6e49c379a8ecfe92c99a37b6bb6d7e440df56bd9 | https://github.com/scrapy/scrapy/issues/3171 | enhancement
discuss | Log with error level instead of debug when reaching max retry times | There's no easy, non-hackish way to log an error when reaching the max retry count with the standard Scrapy `RetryMiddleware`. It's useful for me to be able to see right away if a page I tried to crawl has not been downloaded.
I think it’s sensible to change this line to log to error level instead:
https://github.com/scrapy/scrapy/blob/6cc6bbb5fc5c102271829a554772effb0444023c/scrapy/downloadermiddlewares/retry.py#L89 | null | https://github.com/scrapy/scrapy/pull/3566 | null | {'base_commit': '6e49c379a8ecfe92c99a37b6bb6d7e440df56bd9', 'files': [{'path': 'scrapy/downloadermiddlewares/retry.py', 'status': 'modified', 'Loc': {"('RetryMiddleware', '_retry', 61)": {'mod': [87]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/downloadermiddlewares/retry.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 7ae32ea38d9b78402528ac3dffc8e1c5f1cf86b7 | https://github.com/scrapy/scrapy/issues/5774 | enhancement
CI | Deprecate direct invocation of `setup.py` | I was reading this article: [Why you shouldn't invoke setup.py directly](https://blog.ganssle.io/articles/2021/10/setup-py-deprecated.html) that explains why this is not a good practice anymore. I thought about discussing this. Should we change to new approach?
We have only a few places where we directly invoke `setup.py`, so it should be an easy task to replace it.
https://github.com/scrapy/scrapy/search?q=setup.py | null | https://github.com/scrapy/scrapy/pull/5776 | null | {'base_commit': '7ae32ea38d9b78402528ac3dffc8e1c5f1cf86b7', 'files': [{'path': '.github/workflows/publish.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [27, 28]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [77], 'mod': [79]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"tox.ini",
".github/workflows/publish.yml"
],
"asset": []
} | 1 |
scrapy | scrapy | c57512fa669e6f6b1b766a7639206a380f0d10ce | https://github.com/scrapy/scrapy/issues/50 | enhancement
discuss
patch available | Offsite middleware ignoring port | In my spider I have the following:
``` python
class MySpider(BaseSpider):
    allowed_domains = ['192.169.0.15:8080']
```
and in the parse method I do something like:
``` python
yield Request('http://192.169.0.15:8080/mypage.html', self.my_callback_function)
```
the result when I run the code is that that scrapy reports:
DEBUG: Filtered offsite request to '192.168.0.15': <GET http://192.168.0.15:8080/mypage.html>
Which is wrong - it seems to be ignoring the port. If I change the allowed_domains to:
``` python
allowed_domains = ['192.169.0.15:8080', '192.16.0.15']
```
Then it works as you would expect it to. No big deal, I can work around it, but I think it is a bug. The problem is located in the `should_follow` method of the `OffsiteMiddleware` class in contrib/spidermiddleware/offsite.py
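The fix direction can be sketched like this (hypothetical helpers, not the actual patch): escape each allowed entry literally, so that a `host:port` value only matches that exact netloc:

```python
import re

def get_host_regex(allowed_domains):
    # treat each entry literally, so 'host:port' entries keep their port
    domains = [re.escape(d) for d in allowed_domains if d]
    return re.compile(r'^(.*\.)?(%s)$' % '|'.join(domains))

def should_follow(netloc, host_regex):
    return bool(host_regex.search(netloc))

regex = get_host_regex(['192.169.0.15:8080'])
print(should_follow('192.169.0.15:8080', regex))  # True
print(should_follow('192.169.0.15', regex))       # False: port no longer ignored
```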
| null | https://github.com/scrapy/scrapy/pull/4413 | null | {'base_commit': 'c57512fa669e6f6b1b766a7639206a380f0d10ce', 'files': [{'path': 'scrapy/spidermiddlewares/offsite.py', 'status': 'modified', 'Loc': {"('URLWarning', None, 71)": {'add': [72]}, "('OffsiteMiddleware', 'get_host_regex', 51)": {'mod': [56, 58, 62]}}}, {'path': 'tests/test_spidermiddleware_offsite.py', 'status': 'modified', 'Loc': {"('TestOffsiteMiddleware5', 'test_get_host_regex', 77)": {'add': [82]}, '(None, None, None)': {'mod': [7]}, "('TestOffsiteMiddleware', 'test_process_spider_output', 22)": {'mod': [29]}, "('TestOffsiteMiddleware3', None, 56)": {'mod': [58, 59]}, "('TestOffsiteMiddleware4', None, 62)": {'mod': [64, 65, 66]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/spidermiddlewares/offsite.py"
],
"doc": [],
"test": [
"tests/test_spidermiddleware_offsite.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 845c64b89df765ff5c015632c082b6472e61b7d3 | https://github.com/scrapy/scrapy/issues/306 | --output-format raise on invalid format | _Using version 0.16.4_
Currently, if an invalid format is passed to the `-t` or `--output-format` options, the spider will proceed with its crawling operation, but no output will be saved or produced. This can be frustrating on large scrape runs: a user who passed a mistyped format and assumed their operation was saving output may only discover later that the scraped data was merely logged to stdout.
Should we make the output format option raise or fail+exit if an invalid or unknown format is passed?
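A sketch of the fail-fast behaviour being proposed (hypothetical helper and format list; the real command would derive the valid formats from the configured exporters):

```python
class UsageError(Exception):
    """Raised for bad command-line usage."""

VALID_FORMATS = {'json', 'jsonlines', 'csv', 'xml', 'pickle', 'marshal'}

def check_output_format(fmt):
    # raise immediately instead of crawling for hours with no output written
    if fmt not in VALID_FORMATS:
        raise UsageError(
            "Unrecognized output format %r, set a supported one: %s"
            % (fmt, ', '.join(sorted(VALID_FORMATS)))
        )
    return fmt

print(check_output_format('json'))  # json
```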
| null | https://github.com/scrapy/scrapy/pull/307 | null | {'base_commit': '845c64b89df765ff5c015632c082b6472e61b7d3', 'files': [{'path': 'scrapy/commands/crawl.py', 'status': 'modified', 'Loc': {"('Command', 'process_options', 24)": {'add': [34]}}}, {'path': 'scrapy/commands/runspider.py', 'status': 'modified', 'Loc': {"('Command', 'process_options', 46)": {'add': [56]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/commands/crawl.py",
"scrapy/commands/runspider.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 5f194202114fd38530c78299d51b6966b4802f59 | https://github.com/scrapy/scrapy/issues/5621 | Support for Twisted 22.8.0 | Twisted 22.8.0 was released recently, and it says:
> Twisted now works with Cryptography versions 37 and above, and as a result, its minimum TLS protocol version has been upgraded to TLSv1.2.
Consequently, tests on some envs, including 3.8, now fail because they install older cryptography. | null | https://github.com/scrapy/scrapy/pull/5632 | null | {'base_commit': '5f194202114fd38530c78299d51b6966b4802f59', 'files': [{'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23, 27, 29]}}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {"('CrawlerProcessSubprocess', 'test_reactor_default_twisted_reactor_select', 328)": {'mod': [330]}}}, {'path': 'tox.ini', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [76, 82, 84]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"setup.py"
],
"doc": [],
"test": [
"tests/test_crawler.py"
],
"config": [
"tox.ini"
],
"asset": []
} | 1 | |
scrapy | scrapy | 67ab8d4650c1e9212c9508803c7b5265e166cbaa | https://github.com/scrapy/scrapy/issues/6433 | core.engine/Signal handler polluting log | ### Description
The `OffsiteMiddleware` logs a single message for each domain filtered. Great!
But then the `core.engine` logs a message for every single url filtered by the OffsiteMiddleware.
(LOG_LEVEL: DEBUG)
The websites I am scraping have like 10 external links to twitter/youtube/etc in each page. For hundreds of pages scraped, the only thing I can see in the logs is `Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request`.
I don't know if this is intended behavior. If so, it is obviously not a bug.
But nonetheless, it is very different behavior compared to previous 1.x Scrapy versions. (I don't know when it has changed and I couldn't find anything in the release notes about that.)
If not a bug, maybe we could discuss the possibility of changing this behavior so we can have logs less polluted when debugging.
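As a user-side workaround (hypothetical; the exact logger name emitting these messages may differ between Scrapy versions), a `logging.Filter` can drop the repeated lines:

```python
import logging

class DropSignalNoiseFilter(logging.Filter):
    """Silence the per-URL 'dropped request' debug messages."""

    def filter(self, record):
        # keep every record except the noisy per-request drop notices
        return 'dropped request' not in record.getMessage()

# attach to the logger that emits the noise, e.g. in the spider's __init__
logging.getLogger('scrapy.core.engine').addFilter(DropSignalNoiseFilter())
```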
### Steps to Reproduce
#### Just run the following spider.
(url taken from another issue).
```python
import scrapy

class TestSpider(scrapy.spiders.CrawlSpider):
    name = 'test'
    allowed_domains = ['capybala.com']
    start_urls = ['https://capybala.com/']
    custom_settings = {
        'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
        'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor',
        'LOG_LEVEL': 'DEBUG',
    }
    rules = (scrapy.spiders.Rule(scrapy.linkextractors.LinkExtractor(), callback='parse', follow=True),)

    def parse(self, response):
        print('noop')
```
#### Output:
```txt
2024-07-08 16:34:43 [scrapy.utils.log] INFO: Scrapy 2.11.2 started (bot: scrapybot)
2024-07-08 16:34:43 [scrapy.utils.log] INFO: Versions: lxml 5.2.2.0, libxml2 2.12.6, cssselect 1.2.0, parsel 1.9.1, w3lib 2.2.1, Twisted 24.3.0, Python 3.12.4 (main, Jul 3 2024, 16:55:58) [GCC 11.2.0], pyOpenSSL 24.1.0 (OpenSSL 3.2.2 4 Jun 2024), cryptography 42.0.8, Platform Linux-5.15.145-x86_64-AMD_Ryzen_9_5980HX_with_Radeon_Graphics-with-glibc2.33
2024-07-08 16:34:43 [scrapy.addons] INFO: Enabled addons:
[]
2024-07-08 16:34:43 [asyncio] DEBUG: Using selector: EpollSelector
2024-07-08 16:34:43 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2024-07-08 16:34:43 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.unix_events._UnixSelectorEventLoop
2024-07-08 16:34:43 [scrapy.extensions.telnet] INFO: Telnet Password: d2c4cce2938fba32
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.memusage.MemoryUsage',
'scrapy.extensions.logstats.LogStats']
2024-07-08 16:34:43 [scrapy.crawler] INFO: Overridden settings:
{'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'SPIDER_LOADER_WARN_ONLY': True,
'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.offsite.OffsiteMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2024-07-08 16:34:43 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2024-07-08 16:34:43 [scrapy.core.engine] INFO: Spider opened
2024-07-08 16:34:43 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2024-07-08 16:34:43 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/> (referer: None)
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'bokuran.com': <GET https://bokuran.com/>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'webooker.info': <GET http://webooker.info/2013/10/ebook1-release/>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'ebook-1.com': <GET https://ebook-1.com/>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://ebook-1.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'chrome.google.com': <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.downloadermiddlewares.offsite] DEBUG: Filtered offsite request to 'twitter.com': <GET https://twitter.com/orangain>
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/orangain> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/webooker_log> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/find-kindle-edition/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/bokuran/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/ebook-1/> (referer: https://capybala.com/)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://capybala.com/dendrogram/> (referer: https://capybala.com/)
noop
2024-07-08 16:34:44 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://capybala.com/> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://ebook-1.com/> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/orangain> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://twitter.com/webooker_log> before it reached the scheduler.
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/> before it reached the scheduler.
noop
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://chrome.google.com/webstore/detail/find-ebook-edition/jhhpocdmfelpmobcnmjfppdpnbepkono> before it reached the scheduler.
noop
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET https://bokuran.com/> before it reached the scheduler.
noop
noop
2024-07-08 16:34:44 [scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET http://webooker.info/2013/10/ebook1-release/> before it reached the scheduler.
2024-07-08 16:34:45 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://tree.capybala.com/> (referer: https://capybala.com/)
noop
2024-07-08 16:34:45 [scrapy.core.engine] INFO: Closing spider (finished)
2024-07-08 16:34:45 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1735,
'downloader/request_count': 7,
'downloader/request_method_count/GET': 7,
'downloader/response_bytes': 17486,
'downloader/response_count': 7,
'downloader/response_status_count/200': 7,
'dupefilter/filtered': 16,
'elapsed_time_seconds': 1.950522,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2024, 7, 8, 19, 34, 45, 376469, tzinfo=datetime.timezone.utc),
'httpcompression/response_bytes': 29892,
'httpcompression/response_count': 7,
'log_count/DEBUG': 33,
'log_count/INFO': 10,
'memusage/max': 70103040,
'memusage/startup': 70103040,
'offsite/domains': 5,
'offsite/filtered': 17,
'request_depth_max': 2,
'response_received_count': 7,
'scheduler/dequeued': 7,
'scheduler/dequeued/memory': 7,
'scheduler/enqueued': 7,
'scheduler/enqueued/memory': 7,
'start_time': datetime.datetime(2024, 7, 8, 19, 34, 43, 425947, tzinfo=datetime.timezone.utc)}
2024-07-08 16:34:45 [scrapy.core.engine] INFO: Spider closed (finished)
```
**Expected behavior:**
I was not expecting to see so many `[scrapy.core.engine] DEBUG: Signal handler scrapy.downloadermiddlewares.offsite.OffsiteMiddleware.request_scheduled dropped request <GET [...]> before it reached the scheduler.` messages. I believe just the messages given by the OffsiteMiddleware are enough.
**Actual behavior:**
There are **a lot** of "dropped request" messages.
Furthermore, the same message is repeated if the same URL is encountered more than once (e.g. https://twitter.com/orangain or https://twitter.com/webooker_log in the log above).
**Reproduces how often:** always
### Versions
$ scrapy version --verbose
Scrapy : 2.11.2
lxml : 5.2.2.0
libxml2 : 2.12.6
cssselect : 1.2.0
parsel : 1.9.1
w3lib : 2.2.1
Twisted : 24.3.0
Python : 3.12.4 (main, Jul 3 2024, 16:55:58) [GCC 11.2.0]
pyOpenSSL : 24.1.0 (OpenSSL 3.2.2 4 Jun 2024)
cryptography : 42.0.8
Platform : Linux-5.15.145-x86_64-AMD_Ryzen_9_5980HX_with_Radeon_Graphics-with-glibc2.33
### Additional context
I believe this has nothing to do with the `CrawlSpider`, but that is what I am using. | null | https://github.com/scrapy/scrapy/pull/6475 | null | {'base_commit': '67ab8d4650c1e9212c9508803c7b5265e166cbaa', 'files': [{'path': 'scrapy/core/engine.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [42]}, "('ExecutionEngine', '_schedule_request', 319)": {'mod': [328, 329, 330, 331]}}}, {'path': 'tests/test_engine.py', 'status': 'modified', 'Loc': {"(None, 'test_request_scheduled_signal', 474)": {'mod': [502]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/core/engine.py"
],
"doc": [],
"test": [
"tests/test_engine.py"
],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | d60b4edd11436e61284615ec7ce89f8ac7e46d9a | https://github.com/scrapy/scrapy/issues/5857 | bug
good first issue
https | TLS logging broken with new cryptography | https://github.com/pyca/cryptography/pull/8391 dropped `SSL_get_server_tmp_key()` so we need to disable the code that uses it if it's not available. | null | https://github.com/scrapy/scrapy/pull/5858 | null | {'base_commit': 'd60b4edd11436e61284615ec7ce89f8ac7e46d9a', 'files': [{'path': 'scrapy/utils/ssl.py', 'status': 'modified', 'Loc': {"(None, 'get_temp_key_info', 21)": {'add': [22]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/utils/ssl.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 5fac2d7b90da8f06597df8536bbadd6cadef5d7e | https://github.com/scrapy/scrapy/issues/509 | Ubuntu repositories | Current repositories setup requires maintaining an apt repository per codename (precise, quantal, raring, saucy, trusty,...) and it bugs us on every new ubuntu release.
The truth is that we build Debian packages on a Precise host and upload the same package to all repositories. There is a legacy reason for using multiple repos per codename: we started publishing and building Debian packages on Lucid (Python 2.6), and when Precise arrived we had to build for Python 2.7. Lucid packages were published for Karmic, Maverick and Natty, while Precise packages were published for the others. There was also the Ubuntu switch to Upstart, which affected Scrapyd packaging at that time.
There are two ideas flowing around:
1. Unify repositories and install instructions to:
```
deb http://archive.scrapy.org/ubuntu scrapy main
```
2. Move repositories to Ubuntu PPA managed by Scrapy team.
Option (1) is simple and will work as long as Python 2.7 is available in Ubuntu.
Option (2) has the advantage that a new Debian package is built per codename, and we don't rely on ScrapingHub infrastructure to build and distribute debs.
I intentionally left out the discussion about renaming `scrapy-VERSION` to `scrapy`, but it may be related if we want to publish oldstable/stable/trunk versions under the same name but in different repository _components_.
| null | https://github.com/scrapy/scrapy/pull/549 | null | {'base_commit': '7f30a671c3ae545417b627e314688058699b3ffa', 'files': [{'path': 'docs/topics/ubuntu.rst', 'status': 'modified', 'Loc': {'(None, None, 14)': {'mod': [14, 15, 16]}, '(None, None, 18)': {'mod': [18]}, '(None, None, 20)': {'mod': [20, 21]}, '(None, None, 23)': {'mod': [23]}, '(None, None, 25)': {'mod': [25]}, '(None, None, 27)': {'mod': [27]}, '(None, None, 29)': {'mod': [29]}, '(None, None, 31)': {'mod': [31]}, '(None, None, 33)': {'mod': [33, 35, 37, 39, 40, 41, 43, 44, 46]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/topics/ubuntu.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
scrapy | scrapy | 0ee04e1e91f42d7fdd69f20b00a06e7856cdc919 | https://github.com/scrapy/scrapy/issues/4336 | enhancement
docs | Needs change on "Example of shell session" in Scrapy 1.8.0 docs | ### Description
I was learning how to use the Scrapy shell but got an error similar to issue #3314, and found the solution in that issue as well. But when I looked back at the docs (1.8.0), the example still uses single quotes (') instead of double quotes ("). I think it is better to change it, especially for future learners like me.
**Expected behavior:**
*Example of shell session*
...
scrapy shell "https://scrapy.org" --nolog
**Actual behavior:**
*Example of shell session*
...
scrapy shell 'https://scrapy.org' --nolog
| null | https://github.com/scrapy/scrapy/pull/4450 | null | {'base_commit': '0ee04e1e91f42d7fdd69f20b00a06e7856cdc919', 'files': [{'path': 'docs/topics/shell.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [158]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/topics/shell.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | d43a35735a062a4260b002cfbcd3236c77ef9399 | https://github.com/scrapy/scrapy/issues/951 | bug | Extraction of gzipped sitemap fails in Scrapy 0.24.4 | retrieving a gzipped sitemap xml (tested on amazon.de) fails.
Reproduce with:
1. Modify the `gunzip` method in `/utils/gz.py` to write the incoming data to a file.
2. Gunzip the file on the command line.
3. The unzipped file contains garbled content.
4. Gunzip that garbled file a second time and get the correct content.
-> I suspect that the content coming from the target server is already gzip compressed and scrapy has a bug that causes the gzip decompression to not work properly, resulting in a double compressed file arriving at the /utils/gz.py gunzip method
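That suspicion is easy to check with the stdlib `gzip` module alone: a doubly compressed body needs two decompression passes, and after one pass the data still starts with the gzip magic bytes (the payload below is a stand-in, not real amazon.de data):

```python
import gzip

payload = b"<?xml version='1.0'?><urlset></urlset>"  # stand-in sitemap body
double = gzip.compress(gzip.compress(payload))       # compressed twice

once = gzip.decompress(double)  # one pass: still a gzip stream, looks garbled
assert once[:2] == b"\x1f\x8b"  # gzip magic bytes
assert gzip.decompress(once) == payload  # second pass recovers the sitemap
```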
| null | https://github.com/scrapy/scrapy/pull/2065 | null | {'base_commit': 'd43a35735a062a4260b002cfbcd3236c77ef9399', 'files': [{'path': 'scrapy/utils/gz.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [53]}, "(None, 'is_gzipped', 55)": {'mod': [58]}}}, {'path': 'tests/test_downloadermiddleware_httpcompression.py', 'status': 'modified', 'Loc': {"('HttpCompressionTest', None, 22)": {'add': [147]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/utils/gz.py"
],
"doc": [],
"test": [
"tests/test_downloadermiddleware_httpcompression.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 451f1474689a18d6a54630915c42172626624ef7 | https://github.com/scrapy/scrapy/issues/2145 | bug
in progress | Disabling RedirectMiddleware results in HttpCompressionMiddleware errors | I wanted not to redirect `303` responses, but instead retry them.
From the docs, I thought I could achieve it through two settings:
```
REDIRECT_ENABLED = False
RETRY_HTTP_CODES = [301, 302, 307, 308, 500, 502, 503, 504, 408]
```
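The traceback that follows boils down to `replace()` rebuilding the response via its own constructor, so a keyword that only `TextResponse.__init__` accepts blows up when it reaches a plain `Response`. A minimal stdlib sketch of that pattern (class names are illustrative, not Scrapy's actual implementation):

```python
class Response:
    def __init__(self, url, body=b""):
        self.url, self.body = url, body

    def replace(self, **kwargs):
        kwargs.setdefault("url", self.url)
        kwargs.setdefault("body", self.body)
        return self.__class__(**kwargs)  # forwards every kwarg to __init__

class TextResponse(Response):
    def __init__(self, url, body=b"", encoding="utf-8"):
        super().__init__(url, body)
        self.encoding = encoding

text = TextResponse("https://example.com")
assert text.replace(encoding="cp1252").encoding == "cp1252"  # fine on the subclass

try:
    Response("https://example.com").replace(encoding="cp1252")
except TypeError as exc:
    print(exc)  # unexpected keyword argument 'encoding', as in the traceback
```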
It ended up giving me errors on `HttpCompressionMiddleware`:
```
Traceback (most recent call last):
File "twisted/internet/defer.py", line 1128, in _inlineCallbacks
result = g.send(result)
File "scrapy/core/downloader/middleware.py", line 53, in process_response
spider=spider)
File "scrapy/downloadermiddlewares/httpcompression.py", line 38, in process_response
response = response.replace(**kwargs)
File "scrapy/http/response/text.py", line 50, in replace
return Response.replace(self, *args, **kwargs)
File "scrapy/http/response/__init__.py", line 77, in replace
return cls(*args, **kwargs)
TypeError: __init__() got an unexpected keyword argument 'encoding'
``` | null | https://github.com/scrapy/scrapy/pull/2393 | null | {'base_commit': '451f1474689a18d6a54630915c42172626624ef7', 'files': [{'path': 'scrapy/downloadermiddlewares/httpcompression.py', 'status': 'modified', 'Loc': {"('HttpCompressionMiddleware', 'process_response', 31)": {'mod': [41]}}}, {'path': 'tests/test_downloadermiddleware_httpcompression.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, "('HttpCompressionTest', None, 25)": {'add': [154]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/downloadermiddlewares/httpcompression.py"
],
"doc": [],
"test": [
"tests/test_downloadermiddleware_httpcompression.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 4626e90df8ba4a945bb9cd6be47a915788e76f23 | https://github.com/scrapy/scrapy/issues/3871 | cleanup | Deprecate hacky code from get_project_settings() | [Reported](https://github.com/scrapy/scrapy/pull/3859#issuecomment-510838622) by @nyov:
> @kmike, would you or someone perhaps also find time to correctly deprecate this (or just rip it > out)?: https://github.com/scrapy/scrapy/blob/9c90d9515a50ede29415b8b5d6ba11229f333b49/scrapy/utils/project.py#L70-L79
> Or is it still needed. | null | https://github.com/scrapy/scrapy/pull/3910 | null | {'base_commit': '9c514b976ffdf069b81c4b7728a7e8e531710680', 'files': [{'path': 'scrapy/utils/project.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "(None, 'get_project_settings', 60)": {'add': [72]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/utils/project.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | c340e72988fc6ec615b7b9851c3d28c16c26a839 | https://github.com/scrapy/scrapy/issues/4802 | bug
upstream issue | CachingHostnameResolver does not work with reactor.resolve() | ### Description
Hi. Thank you for maintaining this awesome software :)
I am working on a project using scrapy that implements a custom downloader class ([link](https://github.com/michael-lazar/mozz-archiver/blob/master/mozz_archiver/downloaders.py)).
I want to resolve IPv6 addresses, and I found the section in the documentation about the ``DNS_RESOLVER`` setting that was added in #4227. I tried enabling the new ``DNS_RESOLVER = "scrapy.resolver.CachingHostnameResolver"`` and was immediately greeted with this exception.
```
Unhandled Error
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/scrapy/commands/crawl.py", line 27, in run
self.crawler_process.start()
File "/usr/local/lib/python3.8/site-packages/scrapy/crawler.py", line 327, in start
reactor.run(installSignalHandlers=False) # blocking call
File "/usr/local/lib/python3.8/site-packages/twisted/internet/base.py", line 1283, in run
self.mainLoop()
File "/usr/local/lib/python3.8/site-packages/twisted/internet/base.py", line 1292, in mainLoop
self.runUntilCurrent()
--- <exception caught here> ---
File "/usr/local/lib/python3.8/site-packages/twisted/internet/base.py", line 913, in runUntilCurrent
call.func(*call.args, **call.kw)
File "/usr/local/lib/python3.8/site-packages/twisted/internet/tcp.py", line 449, in resolveAddress
d = self.reactor.resolve(self.addr[0])
File "/usr/local/lib/python3.8/site-packages/twisted/internet/base.py", line 638, in resolve
return self.resolver.getHostByName(name, timeout)
File "/usr/local/lib/python3.8/site-packages/twisted/internet/_resolver.py", line 277, in getHostByName
self._nameResolver.resolveHostName(FirstOneWins(result), name, 0,
File "/usr/local/lib/python3.8/site-packages/scrapy/resolver.py", line 80, in resolveHostName
class CachingResolutionReceiver(resolutionReceiver):
builtins.TypeError: __init__() takes 2 positional arguments but 4 were given
```
### Steps to Reproduce
This is also reproducible using the bundled FTP downloader
1. ``scrapy startproject scrapy_test``
2. ``scrapy genspider example mozz.us``
3. Add ``DNS_RESOLVER = "scrapy.resolver.CachingHostnameResolver"`` to the settings file
4. Change the spider start_url to ``ftp://mozz.us``
5. ``scrapy crawl scrapy_test``
### Versions
```
Scrapy : 2.3.0
lxml : 4.5.2.0
libxml2 : 2.9.10
cssselect : 1.1.0
parsel : 1.6.0
w3lib : 1.22.0
Twisted : 20.3.0
Python : 3.8.5 (default, Jul 21 2020, 10:48:26) - [Clang 11.0.3 (clang-1103.0.32.62)]
pyOpenSSL : 19.1.0 (OpenSSL 1.1.1g 21 Apr 2020)
cryptography : 3.0
Platform : macOS-10.15.6-x86_64-i386-64bit
```
### Additional context
This was a tricky one to debug because everything works as expected with the HTTP Agent downloader. This issue only appears when you implement a downloader that depends on calling ``reactor.resolve()`` directly without using ``twisted.internet.endpoints.HostnameEndpoint``.
I discovered that in the twisted [IHostnameResolver](https://twistedmatrix.com/documents/current/api/twisted.internet.interfaces.IHostnameResolver.html) interface, the ``resolutionReceiver`` method argument is expected to be an *instance* of a resolution receiver class, and not a *type* of a resolution receiver class. So I believe the scrapy code below is incorrect:
https://github.com/scrapy/scrapy/blob/5e997587d9b13344a0afa9bb4cf781829a66ce23/scrapy/resolver.py#L76-L80
The subclass here only works with the Scrapy Agent because the ``HostnameEndpoint`` does this weird thing where it defines a class with only static methods, so it can pass the class itself instead of instantiating it.
https://github.com/twisted/twisted/blob/22f949f7ce187513f0c218b73186c8a73baa00b4/src/twisted/internet/endpoints.py#L942-L958
```python
@provider(IResolutionReceiver)
class EndpointReceiver:
@staticmethod
def resolutionBegan(resolutionInProgress):
pass
@staticmethod
def addressResolved(address):
addresses.append(address)
@staticmethod
def resolutionComplete():
d.callback(addresses)
self._nameResolver.resolveHostName(
EndpointReceiver, self._hostText, portNumber=self._port
)
```
However, there are other places in the twisted reactor where twisted does pass an object instance directly to this method.
https://github.com/twisted/twisted/blob/7e3ce790ca9f004ab386f9ecbba8f505d66cd3bd/src/twisted/internet/_resolver.py#L307
```python
result = Deferred()
self._nameResolver.resolveHostName(FirstOneWins(result), name, 0, [IPv4Address])
return result
```
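The class-versus-instance mismatch can be reproduced with plain Python, no Twisted or Scrapy involved: subclassing an *instance* invokes its type as if it were a metaclass, passing the class name, bases, and namespace into `__init__` — which is exactly the `__init__() takes 2 positional arguments but 4 were given` error in the traceback above. The names below mirror the report, but the sketch is standalone and illustrative:

```python
class FirstOneWins:
    """Stand-in for twisted's receiver class; __init__ takes one argument."""
    def __init__(self, deferred):
        self.deferred = deferred

receiver = FirstOneWins(deferred=None)  # twisted passes an *instance* like this

try:
    # `class X(receiver)` calls type(receiver)(name, bases, namespace),
    # i.e. FirstOneWins(...) with three extra positional arguments.
    class CachingResolutionReceiver(receiver):
        pass
except TypeError as exc:
    print(exc)  # __init__() takes 2 positional arguments but 4 were given
```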
| null | https://github.com/scrapy/scrapy/pull/4803 | null | {'base_commit': 'c340e72988fc6ec615b7b9851c3d28c16c26a839', 'files': [{'path': 'scrapy/resolver.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52], 'mod': [3]}, "('CachingHostnameResolver', 'resolveHostName', 76)": {'add': [105], 'mod': [97, 100, 104]}, "('CachingHostnameResolver', None, 54)": {'mod': [76, 77, 79, 80, 82, 83, 84, 85, 87, 88, 89, 91, 92, 93, 94]}}}, {'path': 'tests/CrawlerProcess/alternative_name_resolver.py', 'status': 'removed', 'Loc': {}}, {'path': 'tests/CrawlerProcess/default_name_resolver.py', 'status': 'modified', 'Loc': {"('IPv6Spider', None, 5)": {'add': [5]}, '(None, None, None)': {'mod': [10, 11, 12]}}}, {'path': 'tests/test_crawler.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [24]}, "('CrawlerProcessSubprocess', None, 292)": {'add': [328], 'mod': [324, 325, 326]}, "('ScriptRunnerMixin', None, 282)": {'mod': [283]}, "('ScriptRunnerMixin', 'run_script', 283)": {'mod': [285]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/resolver.py",
"tests/CrawlerProcess/alternative_name_resolver.py",
"tests/CrawlerProcess/default_name_resolver.py"
],
"doc": [],
"test": [
"tests/test_crawler.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | eb49b459c18fc78709267803582376692519e224 | https://github.com/scrapy/scrapy/issues/2076 | install | Ubuntu official repository - installation failed (not able to find python-support) | I used the [manual](http://doc.scrapy.org/en/latest/topics/ubuntu.html) to install scrapy on ubuntu 16.04, but it failed because it was not able to install python-support (>= 0.90.0). Other sources report that this package is not part of the new ubuntu xenial anymore.
Quick&Dirty-Workaround:
```
wget http://launchpadlibrarian.net/109052632/python-support_1.0.15_all.deb
sudo dpkg -i python-support_1.0.15_all.deb
sudo apt-get update && sudo apt-get install scrapy
```
- http://askubuntu.com/questions/766169/why-no-more-python-support-in-16-04
- https://launchpad.net/ubuntu/xenial/amd64/python-support/1.0.15
| null | https://github.com/scrapy/scrapy/pull/2267 | null | {'base_commit': 'eb49b459c18fc78709267803582376692519e224', 'files': [{'path': 'docs/intro/install.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39, 51, 176], 'mod': [10, 12, 14, 16, 17, 18, 20, 21, 23, 24, 26, 27, 29, 31, 33, 34, 36, 37, 41, 42, 44, 45, 47, 49, 92, 93, 98, 99, 100, 102, 103, 104, 108, 110, 112, 114, 115, 117, 118, 120, 122, 179, 182, 187]}}}, {'path': 'docs/topics/ubuntu.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/intro/install.rst",
"docs/topics/ubuntu.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | b0eaf114e5ebe1c5f38a56ed23fcd0515f34d048 | https://github.com/scrapy/scrapy/issues/1403 | bug | Exception in LxmLinkExtractor.extract_links 'charmap' codec can't encode character | ```
Stacktrace (most recent call last):
File "scrapy/utils/defer.py", line 102, in iter_errback
yield next(it)
File "scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
for x in result:
File "scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
return (_set_referer(r) for r in result or ())
File "scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
for x in result:
File "scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
return (r for r in result or () if _filter(r))
File "scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
return (r for r in result or () if _filter(r))
File "scrapy/spiders/crawl.py", line 69, in _parse_response
for requests_or_item in iterate_spider_output(cb_res):
File "ex_link_crawl/spiders/external_link_spider.py", line 45, in parse_obj
for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
File "scrapy/linkextractors/lxmlhtml.py", line 108, in extract_links
links = self._extract_links(doc, response.url, response.encoding, base_url)
File "scrapy/linkextractors/__init__.py", line 103, in _extract_links
return self.link_extractor._extract_links(*args, **kwargs)
File "scrapy/linkextractors/lxmlhtml.py", line 57, in _extract_links
url = url.encode(response_encoding)
File "encodings/cp1252.py", line 12, in encode
return codecs.charmap_encode(input,errors,encoding_table)
```
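The failure is reproducible with plain Python, independent of Scrapy: a response decoded as cp1252 can still yield link URLs containing characters (e.g. from numeric character references) that cp1252 cannot encode back. The URL below is illustrative:

```python
url = "https://example.com/tag/k\u0151"  # U+0151 (ő) has no cp1252 code point

try:
    url.encode("cp1252")  # what linkextractors/lxmlhtml.py line 57 attempts
except UnicodeEncodeError as exc:
    print(exc)

encoded = url.encode("utf-8")  # encoding to UTF-8 instead always succeeds
```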
My use of the extractor is as follows:
```
def parse_obj(self, response):
    if not isinstance(response, HtmlResponse):
        return
    for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
        if not link.nofollow:
            yield LinkCrawlItem(domain=link.url)
```
| null | https://github.com/scrapy/scrapy/pull/4321 | null | {'base_commit': 'b0eaf114e5ebe1c5f38a56ed23fcd0515f34d048', 'files': [{'path': 'scrapy/linkextractors/lxmlhtml.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8, 12]}, "('LxmlParserLinkExtractor', '_extract_links', 54)": {'mod': [69]}}}, {'path': 'tests/test_linkextractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}, "('LinkExtractorTestCase', None, 17)": {'mod': [19]}, "('LinkExtractorTestCase', 'test_extract_all_links', 31)": {'mod': [33, 34, 35, 36]}, "('LinkExtractorTestCase', 'test_restrict_xpaths_with_html_entities', 212)": {'mod': [217]}, "('LinkExtractorTestCase', 'test_attrs', 311)": {'mod': [313, 314, 315, 316]}, "('LxmlLinkExtractorTestCase', None, 469)": {'mod': [509]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/linkextractors/lxmlhtml.py"
],
"doc": [],
"test": [
"tests/test_linkextractors.py"
],
"config": [],
"asset": []
} | 1 |
scrapy | scrapy | 69398fa148603a1cf0c84fbe2fd5d59daf9caa0c | https://github.com/scrapy/scrapy/issues/1487 | enhancement
good first issue | Set `scrapy shell name.tld` default scheme to http | I propose default scheme for sites be set to http:// when using scrapy shell. Like how browsers work.
`scrapy shell yahoo.com` fails but should work.
issue label = trivial
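A hedged sketch of the proposal using only the stdlib — `add_http_if_no_scheme` is a hypothetical helper name, not Scrapy's API: prepend `http://` only when the URL has no scheme, the way a browser's address bar does:

```python
from urllib.parse import urlparse

def add_http_if_no_scheme(url):
    # Behave like a browser's address bar: assume http:// when no scheme given.
    return url if urlparse(url).scheme else "http://" + url

print(add_http_if_no_scheme("yahoo.com"))          # http://yahoo.com
print(add_http_if_no_scheme("https://yahoo.com"))  # https://yahoo.com
```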
| null | https://github.com/scrapy/scrapy/pull/1498 | null | {'base_commit': '69398fa148603a1cf0c84fbe2fd5d59daf9caa0c', 'files': [{'path': 'scrapy/commands/shell.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, "('Command', 'run', 42)": {'add': [43]}}}, {'path': 'scrapy/utils/url.py', 'status': 'modified', 'Loc': {"(None, 'escape_ajax', 86)": {'add': [112]}}}, {'path': 'tests/test_utils_url.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [189], 'mod': [7]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/commands/shell.py",
"scrapy/utils/url.py"
],
"doc": [],
"test": [
"tests/test_utils_url.py"
],
"config": [],
"asset": []
} | 1 |
psf | requests | 5a41febce249e7b74eb37ba7914998ff08321c38 | https://github.com/psf/requests/issues/3633 | HTTPS requests through proxies in proposed/3.0.0 aren't configured correctly | In current master:
```
>>> import requests
>>> requests.__version__
'2.11.1'
>>> session = requests.Session()
>>> r = session.get('https://www.jcline.org/', verify=True, proxies={'http': 'http://vagrant:vagrant@localhost:3128', 'https': 'http://vagrant:vagrant@localhost:3128'})
>>>
```
In current proposed/3.0.0:
```
>>> import requests
>>> requests.__version__
'3.0.0'
>>> session = requests.Session()
>>> r = session.get('https://www.jcline.org/', verify=True, proxies={'http': 'http://vagrant:vagrant@localhost:3128', 'https': 'http://vagrant:vagrant@localhost:3128'})
requests/packages/urllib3/connectionpool.py:838: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/security.html
InsecureRequestWarning)
>>>
```
This is a problem I introduced in https://github.com/kennethreitz/requests/pull/3109 :disappointed:. What happens right now is if a request is _not_ through a proxy and it's HTTPS, the urllib3 pool manager's `connection_pool_kw` are updated before requesting a new connection using [requests.adapters.HTTPAdapter._update_poolmanager_ssl_kw](https://github.com/kennethreitz/requests/blob/proposed/3.0.0/requests/adapters.py#L204). If it _is_ through a proxy, the keywords aren't updated and the request is made with the default settings for urllib3.
To me, the most appealing way to fix this is to add a keyword argument, `connection_kwargs` or something, to all the `urllib3.poolmanager.PoolManager.connection_from_*` methods that is either merged into `connection_pool_kw` or overrides them. That way `urllib3` can handle getting the connection pool with the new kwargs in a thread-safe manner. Currently, `requests` has to manage updating the keys and getting the new connection pool with a lock. It seems like that would be better in `urllib3`.
The other option is to patch up what's currently in `HTTPAdapter` so it handles updating the proxy manager or plain pool manager based on whether proxies are in use.
What do people think?
| null | https://github.com/psf/requests/pull/4173 | null | {'base_commit': 'f3cdbcb86d9535f054f56d937e29293cebc3c55d', 'files': [{'path': 'requests/adapters.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [55], 'mod': [13, 14, 15, 16]}, "('HTTPAdapter', '__init__', 114)": {'mod': [129]}, "('HTTPAdapter', '__setstate__', 137)": {'mod': [142]}, "('HTTPAdapter', None, 85)": {'mod': [207, 208, 209, 210, 211, 213, 214, 215, 216, 217, 218, 219, 220, 221, 223, 225, 226, 227, 229, 230, 232, 233, 234, 236, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 249, 250, 251, 252, 253, 254, 255, 257, 258, 259, 260, 261, 262, 263, 264]}, "('HTTPAdapter', 'get_connection', 303)": {'mod': [312, 313, 314, 316, 318, 319, 320, 321, 322, 323, 324, 325, 326]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33]}, "('TestPreparingURLs', 'test_parameters_for_nonstandard_schemes', 2760)": {'add': [2767]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/adapters.py"
],
"doc": [],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
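The first option described in the issue above, per-call connection keyword arguments merged over the pool manager's defaults, can be illustrated with a small stand-in. The function name and behavior here are hypothetical sketches, not urllib3's actual API:

```python
def connection_from_host_sketch(connection_pool_kw, override_kw=None):
    # Merge per-request kwargs over the manager's defaults without mutating
    # them, so concurrent requests cannot race on shared state.
    merged = dict(connection_pool_kw)
    merged.update(override_kw or {})
    return merged

defaults = {'cert_reqs': 'CERT_NONE', 'timeout': 10}
pool_kw = connection_from_host_sketch(defaults, {'cert_reqs': 'CERT_REQUIRED'})
```

Because the defaults are copied rather than updated in place, a proxied HTTPS request could pick up verification kwargs without requests holding a lock around the pool manager.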
psf | requests | 4e89ba707714e3b58a46c2ed9e220cff8b7f1e6a | https://github.com/psf/requests/issues/2872 | Post request hangs in certain cases when body is a StringIO | This is related to a report for the [Dropbox Python SDK](https://github.com/dropbox/dropbox-sdk-python/issues/27).
The following hangs:
```
from StringIO import StringIO
s = StringIO()
s.write('hello') # This is seeked to the end
requests.post('http://www.google.com', data=s) # Hangs: A success would be a 405 error
```
After a cursory look, it looks like the request isn't fully formed so the server doesn't attempt to send a response which leaves the client hanging.
If we call `s.seek(0)`, this works. A bit more counterintuitively, this also works:
```
requests.post('http://www.google.com', data=StringIO())
```
| null | https://github.com/psf/requests/pull/2873 | null | {'base_commit': '4e89ba707714e3b58a46c2ed9e220cff8b7f1e6a', 'files': [{'path': 'requests/utils.py', 'status': 'modified', 'Loc': {"(None, 'super_len', 50)": {'add': [50], 'mod': [52, 54, 55, 57, 63, 78, 80, 81, 82]}}}, {'path': 'test_requests.py', 'status': 'modified', 'Loc': {"('UtilsTestCase', None, 1330)": {'add': [1353]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/utils.py"
],
"doc": [],
"test": [
"test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
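The hang above comes down to stream position: the body length is computed from the stream, and a freshly written StringIO sits at its end. A stdlib-only sketch of the position-aware length calculation the fix introduced (the helper name is illustrative, not the real `super_len`):

```python
import io

def remaining_len_sketch(stream):
    # Length still readable from the current position; a freshly written
    # StringIO is positioned at the end, so nothing would go on the wire.
    total = len(stream.getvalue())
    return max(0, total - stream.tell())

s = io.StringIO()
s.write('hello')                      # cursor is now at the end
before_seek = remaining_len_sketch(s)
s.seek(0)
after_seek = remaining_len_sketch(s)
```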
psf | requests | 56ecdebcc507c71f2386d3bf2ea14db2d27cc834 | https://github.com/psf/requests/issues/2756 | Bug
Contributor Friendly | Json supersedes data in prepare_body | When not a stream, json supersedes data in prepare_body:
https://github.com/kennethreitz/requests/blob/f5dacf84468ab7e0631cc61a3f1431a32e3e143c/requests/models.py#L446
This conflicts with the docstring, which indicates that json is only used when data is not specified:
https://github.com/kennethreitz/requests/blob/f5dacf84468ab7e0631cc61a3f1431a32e3e143c/requests/models.py#L195
| null | https://github.com/psf/requests/pull/2763 | null | {'base_commit': '56ecdebcc507c71f2386d3bf2ea14db2d27cc834', 'files': [{'path': 'requests/models.py', 'status': 'modified', 'Loc': {"('PreparedRequest', 'prepare_body', 406)": {'mod': [417, 446]}}}, {'path': 'test_requests.py', 'status': 'modified', 'Loc': {"('RequestsTestCase', None, 59)": {'add': [1064]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/models.py"
],
"doc": [],
"test": [
"test_requests.py"
],
"config": [],
"asset": []
} | 1 |
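A sketch of the precedence the docstring promises, with `json` consulted only when `data` is absent. This is a hypothetical helper illustrating the intended logic, not the actual `prepare_body`:

```python
import json as jsonlib

def choose_body_sketch(data=None, json=None):
    # Per the docstring: json applies only when data was not supplied.
    if data is None and json is not None:
        return jsonlib.dumps(json), 'application/json'
    return data, None

body, content_type = choose_body_sketch(data='a=1', json={'a': 2})
json_body, json_type = choose_body_sketch(json={'a': 2})
```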
psf | requests | 1c52d15d9772e459add567cbdc9d38a284a8d939 | https://github.com/psf/requests/issues/1882 | ResourceWarning in python 3.2+ | Requests issues a ResourceWarning in python 3.2+ as sockets are not explicitly closed before garbage collection occurs. While ResourceWarnings are not displayed by default, it can be a distraction to some developers when working with warnings enabled.
File: test.py
``` python
import requests
def make_request():
resp = requests.get('http://google.com')
resp.close() # this appears to have no effect, even though the function exists
make_request()
```
```
$ python -Wall test.py
test.py:7: ResourceWarning: unclosed <socket.socket object, fd=4, family=2, type=1, proto=6>
make_request()
test.py:7: ResourceWarning: unclosed <socket.socket object, fd=3, family=2, type=1, proto=6>
make_request()
```
It would be great if there was a way to prevent the ResourceWarning from occurring, without issuing a `Connection:close` header.
| null | https://github.com/psf/requests/pull/2326 | null | {'base_commit': '1c52d15d9772e459add567cbdc9d38a284a8d939', 'files': [{'path': 'requests/api.py', 'status': 'modified', 'Loc': {"(None, 'request', 17)": {'mod': [49]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/api.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
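The merged fix wrapped the throwaway session in a context manager so its sockets are closed deterministically. A minimal stand-in class (not the real requests.Session) showing the pattern:

```python
class SessionSketch:
    # Stand-in for a session that owns sockets; __exit__ guarantees close(),
    # so no ResourceWarning fires at garbage-collection time.
    def __init__(self):
        self.closed = False

    def request(self, method, url):
        return '%s %s' % (method, url)

    def close(self):
        self.closed = True

    def __enter__(self):
        return self

    def __exit__(self, *exc_info):
        self.close()

def request_sketch(method, url):
    # Mirrors the shape of the fix: one with-block per top-level call.
    with SessionSketch() as session:
        return session.request(method, url)

result = request_sketch('GET', 'http://google.com')
with SessionSketch() as tracked:
    pass
```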
psf | requests | be62645dd56580dd7576032b348cf79d880851d8 | https://github.com/psf/requests/issues/1208 | Not possible to specify max_retries in v1.X? | In older versions of requests (pre v1.0), I was able to do:
```
requests.get('http://nonexistentdomainfoobar.com', config={"max_retries":10})
```
as far as I can tell, this isn't possible in v.1.0+. `HTTPAdapter.max_retries` uses `DEFAULT_RETRIES` and there's no way to change this.
Would it be possible to restore this feature? If not, perhaps a note in the FAQ informing users that this isn't possible and they'll have to write a loop themselves?
| null | https://github.com/psf/requests/pull/1219 | null | {'base_commit': 'be62645dd56580dd7576032b348cf79d880851d8', 'files': [{'path': 'AUTHORS.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [123]}}}, {'path': 'requests/adapters.py', 'status': 'modified', 'Loc': {"('HTTPAdapter', '__init__', 47)": {'mod': [48]}, "('HTTPAdapter', None, 45)": {'mod': [169]}, "('HTTPAdapter', 'send', 169)": {'mod': [191]}}}, {'path': 'requests/api.py', 'status': 'modified', 'Loc': {"(None, 'request', 17)": {'add': [29]}}}, {'path': 'requests/sessions.py', 'status': 'modified', 'Loc': {"('SessionRedirectMixin', 'resolve_redirects', 82)": {'add': [151], 'mod': [83]}, "('Session', 'request', 232)": {'add': [239, 306]}, "('Session', 'send', 389)": {'add': [401], 'mod': [422]}}}, {'path': 'test_requests.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [356]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/sessions.py",
"requests/adapters.py",
"requests/api.py"
],
"doc": [
"AUTHORS.rst"
],
"test": [
"test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
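Since the 1.x API, retries come back by mounting an `HTTPAdapter(max_retries=...)` on a `Session`. The underlying idea, retrying connection-level failures a bounded number of times, can be sketched with a stand-in fetch function instead of a live request:

```python
import time

def get_with_retries(fetch, max_retries=3, backoff=0.0):
    # Retry connection-level failures up to max_retries times, roughly the
    # behavior the old config={"max_retries": 10} option provided.
    last_exc = None
    for attempt in range(max_retries):
        try:
            return fetch()
        except ConnectionError as exc:
            last_exc = exc
            time.sleep(backoff * attempt)
    raise last_exc

calls = []
def flaky():
    # Fails twice, then succeeds: a stand-in for a spotty connection.
    calls.append(1)
    if len(calls) < 3:
        raise ConnectionError('nonexistentdomainfoobar.com unreachable')
    return 'ok'

result = get_with_retries(flaky, max_retries=5)
```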
psf | requests | 1642996798416efaca754e4678506502e4c4c1f3 | https://github.com/psf/requests/issues/1228 | Problem with missing cookies after redirect | I sent this by e-mail - no response. I think this might be of interest to others:
> I have a problem when connecting to a site. Here's the scenario:
>
> 1) I enter a login page, which has a form
> 2) I send (using Requests) a POST with the username, pw, etc.
> (This POST includes the SESSIONID)
> 3) The webpage responds with a 302,
> 4) To which requests does automatically a GET to the new address
> 5) In Firefox, this works. In Requests, I get redirected to the
> login page (with another 302).
>
> The only important difference I can detect is that in point 4),
> Firefox repeats automatically the SESSION ID, which Requests does
> not do. Can I enable this?
I solved the problem by disabling automatic redirects, and creating
a new request manually, with the sessionid cookie. Now the process
runs successfully.
This confirms the necessity of repeating the cookie in the
request after the 302, but it defeats the 'neatness' of the auto
redirects.
Cheers,
John
| null | https://github.com/psf/requests/pull/1239 | null | {'base_commit': '1642996798416efaca754e4678506502e4c4c1f3', 'files': [{'path': 'requests/sessions.py', 'status': 'modified', 'Loc': {"('SessionRedirectMixin', 'resolve_redirects', 82)": {'mod': [93]}}}, {'path': 'test_requests.py', 'status': 'modified', 'Loc': {"('RequestsTestCase', None, 29)": {'add': [120]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/sessions.py"
],
"doc": [],
"test": [
"test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
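The fix adjusted `resolve_redirects` so session cookies ride along on the follow-up request. The intent can be sketched with plain dicts; this is an illustration only, since the real implementation works on cookie jars:

```python
def redirected_headers_sketch(session_cookies, request_headers):
    # Re-attach session cookies (the session id included) to the GET that
    # follows a 302, matching what a browser does automatically.
    headers = dict(request_headers)
    if session_cookies:
        headers['Cookie'] = '; '.join(
            '%s=%s' % (k, v) for k, v in sorted(session_cookies.items()))
    return headers

headers = redirected_headers_sketch({'sessionid': 'abc123'}, {'Accept': '*/*'})
```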
psf | requests | 4683f169909857d663275346655975af7190fd62 | https://github.com/psf/requests/issues/1979 | Authentication Handlers lost on redirect. | I'm trying to use the requests library to follow a redirection with the Digest authentication method, but the response is 401. I mention that it works with basic authentication. I've captured the packets with wireshark, and noticed that the first HTTP request is sent without the Authorization header, the 401 unauthorized answer is received, and after that the traffic continues as it should: the Authorization header is added, the 302 answer is received, and after that the HTTPS cipher exchange follows. I don't know why the requests.send method returns 401.
| null | https://github.com/psf/requests/pull/2253 | null | {'base_commit': 'a718a81d273503bd2ffae8e6cb036a8516eb426a', 'files': [{'path': 'requests/auth.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [19]}, "('HTTPDigestAuth', None, 60)": {'add': [152]}, "('HTTPDigestAuth', '__call__', 188)": {'add': [196]}, "('HTTPDigestAuth', 'handle_401', 153)": {'mod': [185]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/auth.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
psf | requests | 1c2022cf868cb503815f34901ad8e85cf524d01a | https://github.com/psf/requests/issues/4239 | Feature Request
Contributor Friendly | Add header name to InvalidHeader exception message | requests.get('http://example.com', headers={'foo': 1})
requests.exceptions.InvalidHeader: Header value 1 must be of type str or bytes, not <class 'int'>
It would be good to add the name of the bad header to make it easier
to track this down in large bodies of code. Something like:
requests.exceptions.InvalidHeader: Header foo value 1 must be of type str or bytes, not <class 'int'>
Thanks.
| null | https://github.com/psf/requests/pull/4240 | null | {'base_commit': '1c2022cf868cb503815f34901ad8e85cf524d01a', 'files': [{'path': 'HISTORY.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}}}, {'path': 'requests/utils.py', 'status': 'modified', 'Loc': {"(None, 'check_header_validity', 854)": {'mod': [871, 872]}}}, {'path': 'tests/test_requests.py', 'status': 'modified', 'Loc': {"('TestRequests', 'test_header_value_not_str', 1395)": {'add': [1405, 1408, 1411], 'mod': [1404, 1407, 1410]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/utils.py"
],
"doc": [
"HISTORY.rst"
],
"test": [
"tests/test_requests.py"
],
"config": [],
"asset": []
} | 1 |
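The change to `check_header_validity` amounts to threading the header's name into the message. A hypothetical stand-in with the message shape the issue asks for:

```python
def check_header_validity_sketch(header):
    # Name the offending header so it can be found in a large codebase.
    name, value = header
    if not isinstance(value, (str, bytes)):
        raise ValueError(
            'Header %r value %r must be of type str or bytes, not %s'
            % (name, value, type(value)))

try:
    check_header_validity_sketch(('foo', 1))
    error_message = None
except ValueError as exc:
    error_message = str(exc)
```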
psf | requests | 0192aac24123735b3eaf9b08df46429bb770c283 | https://github.com/psf/requests/issues/2876 | Needs BDFL Input
Propose Close | Exception messages | As a user I would like it to be easy to generate simple helpful messages upon an exception. A common way this is done is to simply cast the exception to a string. However, with requests, the result is often something you don't want to show an end user. For example:
``` python
try:
downloaded = requests.get(url)
except (requests.Timeout) as err:
print(str(err))
```
Results in the following message to the user:
```
HTTPSConnectionPool(host='cal.example.com', port=443): Max retries exceeded with url: /ken/ken.ics/00832974-ffb3-42ea-ba3e-84ba3c0a30f6.ics (Caused by ConnectTimeoutError(<requests.packages.urllib3.connection.VerifiedHTTPSConnection object at 0x7fd4644ef400>, 'Connection to cal.example.com timed out. (connect timeout=0.1)'))
```
There is useful information in this message, but it is not easily user accessible and is rather intimidating for end users. The information is probably available in the exception itself, but it is not clear how to get it. Also, it seems like accessing it would likely be different for each type of exception, which greatly increases the complexity of catching and reporting exceptions.
What I would expect is something like::
```
Connection to cal.example.com timed out.
```
It would be very helpful if there were an easy way to generate user friendly error messages from requests exceptions. If there is such a way, I have not been able to find it. Thus, I suggest it be added to the otherwise excellent introduction to requests. If there is not such a way, I would like to to suggest that it be added.
| null | https://github.com/certbot/certbot/pull/4733 | null | {'base_commit': '0192aac24123735b3eaf9b08df46429bb770c283', 'files': [{'path': 'requests/sessions.py', 'status': 'modified', 'Loc': {"('Session', 'prepare_request', 417)": {'mod': [423]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/sessions.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
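One stdlib-only way to approximate the requested behavior is to walk the exception chain down to its root cause, whose message is usually the short human-readable part. This is a user-side workaround sketch, not an API requests provides:

```python
def innermost_message_sketch(exc):
    # Follow __cause__/__context__ to the original error for a short,
    # user-facing message instead of the full nested repr.
    while exc.__cause__ is not None or exc.__context__ is not None:
        exc = exc.__cause__ or exc.__context__
    return str(exc)

try:
    try:
        raise TimeoutError('Connection to cal.example.com timed out.')
    except TimeoutError as inner:
        raise RuntimeError('Max retries exceeded with url: /ken/ken.ics') from inner
except RuntimeError as outer:
    friendly = innermost_message_sketch(outer)
```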
psf | requests | 6506dc8f66a3fa82085e9d2e37d6f45d07345b80 | https://github.com/psf/requests/issues/2866 | Better testing for chunked uploads. | We have a whole code branch that does chunked uploads that is not and has never been tested. That's a problem, because sometimes it breaks and we don't notice (#2861). I'd like to add more testing for chunked uploads.
However, we can't use httpbin to do this testing. This is because the WSGI spec staunchly refuses to let applications see chunked transfer encoding at the app layer. For this reason, to test chunked transfer encoding requires a new style of testing, one that doesn't involve running a WSGI server but instead running something that can see the bytes _as they hit the wire_. Basically, something like the socket level tests of urllib3.
| null | https://github.com/psf/requests/pull/2897 | null | {'base_commit': '46184236dc177fb68c7863445609149d0ac243ea', 'files': []} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
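A socket-level test needs to understand the raw chunked framing a client writes. A minimal decoder for `Transfer-Encoding: chunked` (ignoring trailers and chunk extensions) shows what such a test could assert against:

```python
def decode_chunked_sketch(raw):
    # Parse "<hex size>\r\n<data>\r\n" frames until the zero-size terminator,
    # the framing a wire-level test would see on the socket.
    body = b''
    while raw:
        size_line, raw = raw.split(b'\r\n', 1)
        size = int(size_line, 16)
        if size == 0:
            break
        body += raw[:size]
        raw = raw[size + 2:]          # skip chunk data plus trailing CRLF
    return body

decoded = decode_chunked_sketch(b'5\r\nhello\r\n6\r\n world\r\n0\r\n\r\n')
```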
psf | requests | e23bf10cf4ecc62f6c3dd6284043516fb833d9ce | https://github.com/psf/requests/issues/2411 | Requests 2.5.1 doesn't recognize unicode filenames for uploads | After the merge of https://github.com/kennethreitz/requests/pull/2379, which allowed filenames to be `int` types, unicode filenames are no longer recognized under Python 2.
This checks that the filename is a `builtin` `str`, which has different behaviour on Python 2 and Python 3:
`requests/utils.py:118: if name and isinstance(name, builtin_str) and name[0] != '<' and name[-1] != '>':`
In `requests/compat.py`, `builtin_str` is defined as `str`, which is non-unicode `bytes` in Python 2 and unicode in Python 3. Perhaps the check should be against basestring, or is this change in behaviour intended?
| null | https://github.com/psf/requests/pull/2413 | null | {'base_commit': 'd2d576b6b1101e2871c82f63adf2c2b534c2dabc', 'files': [{'path': 'requests/compat.py', 'status': 'modified', 'Loc': {}}, {'path': 'requests/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [28]}, "(None, 'guess_filename', 115)": {'mod': [118]}}}, {'path': 'test_requests.py', 'status': 'modified', 'Loc': {"('UtilsTestCase', None, 1223)": {'add': [1267]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/utils.py",
"requests/compat.py"
],
"doc": [],
"test": [
"test_requests.py"
],
"config": [],
"asset": []
} | 1 | |
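The fix made the filename check accept both text and byte strings, i.e. Python 2's `basestring` behavior. A sketch of that check (an illustrative helper, not the exact `guess_filename`):

```python
def guess_filename_sketch(name):
    # Accept text *and* bytes names while still rejecting pseudo-filenames
    # such as '<fdopen>' that wrap angle brackets around a descriptor.
    if not name or not isinstance(name, (str, bytes)):
        return None
    text = name.decode('utf-8') if isinstance(name, bytes) else name
    if text[0] == '<' and text[-1] == '>':
        return None
    return name

unicode_name = guess_filename_sketch(u'r\xe9sum\xe9.txt')
pseudo_name = guess_filename_sketch('<fdopen>')
bytes_name = guess_filename_sketch(b'data.bin')
```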
psf | requests | 9473f15909fb3f2329247812e0d3c661421ceafc | https://github.com/psf/requests/issues/1397 | Bug | bug report | Dear Kenneth Reitz,
I use your Requests library, which is quite cool. I ran into some issues, like uncaught httplib exceptions,
which (I think) should be handled by Requests.
Consider the following code:
import requests
r = requests.get('http://www.bilhetos.com')
It raises 'httplib.IncompleteRead' exception which is not handled properly in Requests.
Please consider urls below for testing:
http://www.tusseymountaintitans.com
http://www.abbottpanthers.com
http://www.spanishmoms.com
http://www.long-island-storage.com
http://www.cupertinohelpwanted.com
http://www.hoffmanestateshawks.com
http://www.brothermartincrusaders.com
http://www.1-800-printer.com
http://www.impiretickets.com
http://www.gdickinson.com
http://www.forensicsline.com
http://www.gardeningtime.com
http://www.ecollegetennis.com
http://www.milacasaints.com
http://www.bartoninsuranceagency.com
http://www.djnatural.com
http://www.containers2000.com
http://www.indiancreektimberwolves.com
http://www.athenswarriors.com
http://www.logansportcats.com
http://www.osani.com
http://www.xn--sammler-brse-djb.com
http://www.800usahealth.com
http://www.wealth-wise.com
http://www.foothillmustangs.com
http://www.manasquanbigblue.com
http://www.bilhetos.com
http://www.atlantahomesteam.com
http://www.foxcitiessatellite.com
http://www.chargersmail.com
http://www.fighterplace.com
Best regards,
Vladimir Goncharov
| null | https://github.com/psf/requests/pull/1498 | null | {'base_commit': '9473f15909fb3f2329247812e0d3c661421ceafc', 'files': [{'path': 'requests/compat.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [92, 107]}}}, {'path': 'requests/exceptions.py', 'status': 'modified', 'Loc': {"('InvalidURL', None, 54)": {'add': [55]}}}, {'path': 'requests/models.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22, 29]}, "('Response', 'generate', 547)": {'mod': [550, 551]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"requests/exceptions.py",
"requests/models.py",
"requests/compat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
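The fix translates httplib's low-level error into a requests-level exception while streaming the body. A self-contained sketch of that translation (the exception class below is a local stand-in defined for the example):

```python
from http.client import IncompleteRead

class ChunkedEncodingError(Exception):
    # Local stand-in for the requests-level exception added by the fix.
    pass

def generate_sketch(chunk_iter):
    # Mirror of response content generation: surface a library exception
    # instead of leaking httplib's IncompleteRead to callers.
    try:
        for chunk in chunk_iter:
            yield chunk
    except IncompleteRead as exc:
        raise ChunkedEncodingError(exc)

def truncated_stream():
    yield b'partial body'
    raise IncompleteRead(b'partial body')

received = []
error = None
try:
    for chunk in generate_sketch(truncated_stream()):
        received.append(chunk)
except ChunkedEncodingError as exc:
    error = exc
```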
ansible | ansible | 414aae70b160a9eaff55c4314d339305cb33c6e9 | https://github.com/ansible/ansible/issues/41299 | networking
performance
module
support:community
bug
meraki
affects_2.7
cisco | Meraki_admin doesn’t always use org_id and net_id | ##### SUMMARY
`org_id` and `net_id` can be provided to improve playbook execution performance since fewer lookups are needed. `org_id` and `net_id` should be used within the module when possible, to avoid unnecessary API calls.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
meraki_admin
##### ANSIBLE VERSION
```
ansible 2.7.0.dev0 (meraki/meraki_device 387c37e255) last updated 2018/06/06 20:11:36 (GMT -500)
config file = None
configured module search path = ['/Users/kbreit/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/kbreit/Documents/Programming/ansible/lib/ansible
executable location = /Users/kbreit/Documents/Programming/ansible/bin/ansible
python version = 3.5.4 (default, Feb 25 2018, 14:56:02) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.39.2)]
```
##### CONFIGURATION
<!--- If using Ansible 2.4 or above, paste, BELOW THIS COMMENT, the results of "ansible-config dump --only-changed"
Otherwise, mention any settings you have changed/added/removed in ansible.cfg
(or using the ANSIBLE_* environment variables).--> | null | https://github.com/ansible/ansible/pull/41518 | null | {'base_commit': '414aae70b160a9eaff55c4314d339305cb33c6e9', 'files': [{'path': 'lib/ansible/modules/network/meraki/meraki_admin.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [71, 77, 83, 89, 97, 105], 'mod': [87]}, "(None, 'get_admin_id', 174)": {'mod': [174, 179]}, "(None, 'main', 274)": {'mod': [349, 355, 360, 374]}}}, {'path': 'test/integration/targets/meraki_admin/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/network/meraki/meraki_admin.py"
],
"doc": [],
"test": [],
"config": [
"test/integration/targets/meraki_admin/tasks/main.yml"
],
"asset": []
} | 1 |
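The performance point is simply short-circuiting lookups when the caller already supplied the id. A sketch with a fake API-call log; the names and return values are made up for illustration:

```python
def get_net_id_sketch(api_calls, org_id=None, net_name=None, net_id=None):
    # If the playbook already supplied net_id, return it and skip the
    # network-listing round trip entirely.
    if net_id is not None:
        return net_id
    api_calls.append('GET /organizations/%s/networks' % org_id)
    return 'N_1234'          # pretend we found net_name in the listing

calls = []
direct = get_net_id_sketch(calls, net_id='N_9999')
looked_up = get_net_id_sketch(calls, org_id='O_1', net_name='Lab')
```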
ansible | ansible | 707458cc8cc78f5162d6ee76d01fc112499313be | https://github.com/ansible/ansible/issues/69678 | support:core
bug
has_pr
P3
affects_2.10 | constants.py: functions and constants deprecated, to be removed in 2.8 resp. 2.10 | ##### SUMMARY
lib/ansible/constants.py has its own deprecation mechanism:
https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L32-L39
The following functions were supposed to be removed in 2.8:
- `mk_boolean` https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L42
- `get_config` https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L48
The following constant was supposed to be removed in 2.10:
- `BECOME_METHODS` https://github.com/ansible/ansible/blob/devel/lib/ansible/constants.py#L89
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/constants.py
##### ANSIBLE VERSION
```paste below
devel
```
| null | https://github.com/ansible/ansible/pull/70466 | null | {'base_commit': '707458cc8cc78f5162d6ee76d01fc112499313be', 'files': [{'path': 'lib/ansible/constants.py', 'status': 'modified', 'Loc': {"(None, 'mk_boolean', 42)": {'mod': [42, 43, 44, 45, 48, 49, 50, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 63, 65]}, '(None, None, None)': {'mod': [88, 89, 90, 91, 92, 93, 94, 95]}}}, {'path': 'test/units/test_constants.py', 'status': 'modified', 'Loc': {"('TestMkBoolean', None, 97)": {'mod': [97, 98, 99, 100, 102, 103, 105, 106, 107, 108, 110, 111, 112, 113, 114, 116, 117, 118, 119, 120, 121, 122]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/constants.py"
],
"doc": [],
"test": [
"test/units/test_constants.py"
],
"config": [],
"asset": []
} | 1 |
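The deprecation mechanism in question emits a `DeprecationWarning` instead of deleting the symbol outright. A simplified sketch of the pattern, not the exact constants.py code:

```python
import warnings

def _deprecated_sketch(name, version):
    # constants.py-style helper: keep the old API but warn loudly.
    warnings.warn(
        '%s is deprecated and will be removed in %s' % (name, version),
        DeprecationWarning, stacklevel=3)

def mk_boolean_sketch(value):
    _deprecated_sketch('mk_boolean', '2.8')
    return str(value).lower() in ('1', 'true', 'yes', 'on')

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = mk_boolean_sketch('yes')
deprecation_seen = any(
    issubclass(w.category, DeprecationWarning) for w in caught)
```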
ansible | ansible | 3e6c76fc2e6157487a254d42feb17c9673dd4987 | https://github.com/ansible/ansible/issues/40903 | cloud
openstack
c:inventory/contrib_script
inventory
support:core
affects_2.5
bug
traceback | OpenStack Inventory doesn't work when multiple clouds defined |
##### SUMMARY
When more than one cloud is configured in `clouds.yaml`, the OpenStack inventory script errors out
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
contrib/inventory/openstack_inventory.py
##### ANSIBLE VERSION
```
ansible 2.5.2
config file = None
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /home/ubuntu/.local/lib/python2.7/site-packages/ansible
executable location = /home/ubuntu/.local/bin/ansible
python version = 2.7.12 (default, Dec 4 2017, 14:50:18) [GCC 5.4.0 20160609]
```
Inventory version: latest from devel branch (Ansible 2.6 version)
##### CONFIGURATION
Using Ansible defaults
##### OS / ENVIRONMENT
Ubuntu 16.04 64-bit
##### STEPS TO REPRODUCE
clouds.yaml file:
```yaml
clouds:
test:
auth:
auth_url: %AUTHURL%
username: fakeusername
password: fakepassword
project_name: fakeproject
test2:
auth:
auth_url: %AUTHURL%
username: fakeusername
password: fakepassword
project_name: fakeproject
```
Then running the inventory:
```
./openstack_inventory.py --list
```
##### EXPECTED RESULTS
I expect it to aggregate all the cloud inventory into one continuous inventory.
##### ACTUAL RESULTS
```
$ ./openstack_inventory.py --list
Traceback (most recent call last):
File "./openstack_inventory.py", line 265, in <module>
main()
File "./openstack_inventory.py", line 254, in main
output = get_host_groups(inventory, refresh=args.refresh, cloud=args.cloud)
File "./openstack_inventory.py", line 118, in get_host_groups
(cache_file, cache_expiration_time) = get_cache_settings(cloud)
File "./openstack_inventory.py", line 195, in get_cache_settings
config_files=cloud_config.CONFIG_FILES + CONFIG_FILES).get_one()
File "/home/ubuntu/.local/lib/python2.7/site-packages/openstack/config/loader.py", line 1096, in get_one
auth_plugin = loader.load_from_options(**config['auth'])
File "/home/ubuntu/.local/lib/python2.7/site-packages/keystoneauth1/loading/base.py", line 162, in load_from_options
raise exceptions.MissingRequiredOptions(missing_required)
keystoneauth1.exceptions.auth_plugins.MissingRequiredOptions: Auth plugin requires parameters which were not given: auth_url
```
When I remove the second `test2` cloud, the inventory works as expected | null | https://github.com/ansible/ansible/pull/41664 | null | {'base_commit': '3e6c76fc2e6157487a254d42feb17c9673dd4987', 'files': [{'path': 'contrib/inventory/openstack_inventory.py', 'status': 'modified', 'Loc': {"(None, 'get_cache_settings', 193)": {'mod': [195, 196]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"contrib/inventory/openstack_inventory.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 81308e8b22c0d49e9ed27434d15ce4b0d984136c | https://github.com/ansible/ansible/issues/34855 | cloud
module
docker
affects_2.4
support:community
feature | docker_network does not support ipv6 networks |
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
docker_network
##### ANSIBLE VERSION
```
ansible 2.4.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/sm/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.6/dist-packages/ansible
executable location = /usr/local/bin/ansible
python version = 3.6.3 (default, Oct 3 2017, 21:45:48) [GCC 7.2.0]
```
##### SUMMARY
The docker_network module does not support defining an ipv6 network. There is no `enable_ipv6` parameter. Furthermore, a new strategy must be chosen to allow defining custom ipv4 and ipv6 options.
At the moment an ipv6 subnet could be defined with
```yml
- name: Create ipv6 network
docker_network:
name: ipv6
ipam_options:
subnet: 'a:b:c:d::/80'
```
but without setting `enable_ipv6`, containers don't get an ipv6 address. Furthermore, if the task definition does not change on further runs, the task outputs that the network changed, because it does not expect an ipv6 subnet.
To implement ipv6 network definitions, two changes are required.
First, a parameter to enable ipv6 must be introduced. Maybe with the name `enable_ipv6` which is not required and defaults to no.
Second, the `ipam_options` directive must be extended to allow multiple config entries. I would suggest a list:
```yml
- name: Create ipv6 network
docker_network:
name: ipv6
enable_ipv6: yes
ipam_options:
- subnet: '172.3.26.0/16'
gateway: 172.3.26.1
- subnet: 'a:b:c:d::/80'
```
| null | https://github.com/ansible/ansible/pull/47492 | null | {'base_commit': '81308e8b22c0d49e9ed27434d15ce4b0d984136c', 'files': [{'path': 'changelogs/fragments/35370-add_support_for_docker_network_internal_flag.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'lib/ansible/modules/cloud/docker/docker_network.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [64, 71, 168, 204], 'mod': [107, 116, 144, 149, 150, 151, 152]}, "('TaskParameters', '__init__', 182)": {'add': [191, 195]}, "('DockerNetworkManager', '__init__', 207)": {'add': [221]}, "('DockerNetworkManager', 'create_network', 290)": {'add': [291], 'mod': [293, 294, 295, 296, 297, 300, 301, 303, 304, 307, 308, 309, 310, 311]}, "(None, 'main', 387)": {'add': [401, 403], 'mod': [396, 397, 398, 405, 406]}, "('DockerNetworkManager', 'has_different_config', 234)": {'mod': [260, 261, 263, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/cloud/docker/docker_network.py"
],
"doc": [
"changelogs/fragments/35370-add_support_for_docker_network_internal_flag.yaml"
],
"test": [],
"config": [],
"asset": []
} | 1 |
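The "list of config entries" proposal can be sketched as a helper that normalizes mixed IPv4/IPv6 entries into an IPAM-config shape. The function and key names are illustrative, not the module's final API:

```python
def build_ipam_config_sketch(ipam_configs):
    # A *list* of per-family entries (instead of a single dict) lets an
    # IPv4 and an IPv6 subnet coexist on one network definition.
    allowed = ('subnet', 'gateway', 'iprange')
    return [
        {k: v for k, v in entry.items() if k in allowed}
        for entry in ipam_configs
    ]

configs = build_ipam_config_sketch([
    {'subnet': '172.3.26.0/16', 'gateway': '172.3.26.1'},
    {'subnet': 'a:b:c:d::/80'},
])
```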
ansible | ansible | f7d7890df93393b3364fe40c4d8a65c76610c4db | https://github.com/ansible/ansible/issues/81294 | module
bug
has_pr
affects_2.16 | Gathering facts fails on a remote macOS host | ### Summary
When I try to run my playbook against a macOS host, the implicit facts gathering task fails because the non-interactive shell has nothing in its PATH, and ansible is trying to call 'sysctl hw.model'
See lib/ansible/module_utils/facts/hardware/darwin.py line 71
I suggest using the full path like so: '/usr/sbin/sysctl hw.model'
### Issue Type
Bug Report
### Component Name
facts
### Ansible Version
```console
$ ansible --version
ansible [core 2.13.10]
config file = None
configured module search path = ['/Users/avivpeled/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /opt/homebrew/lib/python3.11/site-packages/ansible
ansible collection location = /Users/avivpeled/.ansible/collections:/usr/share/ansible/collections
executable location = /opt/homebrew/bin/ansible
python version = 3.11.4 (main, Jun 15 2023, 07:55:38) [Clang 14.0.3 (clang-1403.0.22.14.1)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
Running ansible on macOS ventura, target host is macOS monterey
### Steps to Reproduce
```yaml
---
- name: Test
hosts: macos
tasks:
- name: Print Gathered Facts
debug:
var: ansible_facts
```
### Expected Results
I expect to see the list of collected facts
### Actual Results
```console
PLAY [Test] *******************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************
[WARNING]: Module invocation had junk after the JSON data: exit status 1
fatal: [mac-mini-intel-04]: FAILED! => {"ansible_facts": {}, "changed": false, "failed_modules": {"ansible.legacy.setup": {"cmd": "sysctl hw.model", "failed": true, "invocation": {"module_args": {"fact_path": "/etc/ansible/facts.d", "filter": [], "gather_subset": ["all"], "gather_timeout": 10}}, "msg": "[Errno 2] No such file or directory: b'sysctl'", "rc": 2, "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}}, "msg": "The following modules failed to execute: ansible.legacy.setup\n"}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | https://github.com/ansible/ansible/pull/81297 | null | {'base_commit': 'f7d7890df93393b3364fe40c4d8a65c76610c4db', 'files': [{'path': 'lib/ansible/module_utils/basic.py', 'status': 'modified', 'Loc': {"('AnsibleModule', None, 360)": {'mod': [1351]}, "('AnsibleModule', 'get_bin_path', 1351)": {'mod': [1356, 1358, 1367, 1368]}}}, {'path': 'lib/ansible/module_utils/common/process.py', 'status': 'modified', 'Loc': {"(None, 'get_bin_path', 12)": {'add': [29, 36, 42, 47], 'mod': [15, 16, 17, 18, 21, 32, 33, 38, 39]}}}, {'path': 'lib/ansible/module_utils/facts/hardware/aix.py', 'status': 'modified', 'Loc': {"('AIXHardware', 'get_dmi_facts', 126)": {'mod': [132]}, "('AIXHardware', 'get_vgs_facts', 146)": {'mod': [163, 164]}, "('AIXHardware', 'get_mount_facts', 188)": {'mod': [197, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 219, 220, 221, 222, 223, 225]}, "('AIXHardware', 'get_device_facts', 231)": {'mod': [235, 236, 237, 239, 240, 242, 243, 244, 245, 246, 247, 248, 249, 250, 251, 252, 254, 255, 256, 257, 258]}}}, {'path': 'lib/ansible/module_utils/facts/hardware/darwin.py', 'status': 'modified', 'Loc': {"('DarwinHardware', 'get_memory_facts', 89)": {'mod': [97, 101, 102, 103, 104, 105, 106, 108, 109, 111, 112, 113, 114, 115, 116, 117, 119, 120, 121, 122, 123, 124, 126]}, "('DarwinHardware', 'get_uptime_facts', 130)": {'mod': [133]}}}, {'path': 'lib/ansible/module_utils/facts/hardware/freebsd.py', 'status': 'modified', 'Loc': {}}, {'path': 'lib/ansible/module_utils/facts/hardware/hpux.py', 'status': 'modified', 'Loc': {"('HPUXHardware', 'populate', 40)": {'add': [42]}}}, {'path': 'lib/ansible/module_utils/facts/hardware/netbsd.py', 'status': 'modified', 'Loc': {"('NetBSDHardware', 'get_uptime_facts', 162)": {'mod': [164]}}}, {'path': 'lib/ansible/module_utils/facts/hardware/openbsd.py', 'status': 'modified', 'Loc': {"('OpenBSDHardware', 'get_uptime_facts', 
113)": {'mod': [115]}}}, {'path': 'lib/ansible/module_utils/facts/hardware/sunos.py', 'status': 'modified', 'Loc': {"('SunOSHardware', 'get_dmi_facts', 167)": {'mod': [175]}}}, {'path': 'lib/ansible/module_utils/facts/network/aix.py', 'status': 'modified', 'Loc': {"('AIXNetwork', 'get_default_interfaces', 31)": {'mod': [34, 36, 37, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48]}, "('AIXNetwork', 'get_interfaces_info', 53)": {'mod': [61, 62, 63]}}}, {'path': 'lib/ansible/module_utils/facts/network/fc_wwn.py', 'status': 'modified', 'Loc': {"('FcWwnInitiatorFactCollector', 'collect', 33)": {'mod': [50, 62, 63, 84, 85]}}}, {'path': 'lib/ansible/module_utils/facts/network/generic_bsd.py', 'status': 'modified', 'Loc': {"('GenericBsdIfconfigNetwork', 'populate', 35)": {'mod': [37, 42]}}}, {'path': 'lib/ansible/module_utils/facts/network/hpux.py', 'status': 'modified', 'Loc': {"('HPUXNetwork', 'populate', 30)": {'mod': [32]}, "('HPUXNetwork', 'get_default_interfaces', 47)": {'mod': [49]}, "('HPUXNetwork', 'get_interfaces_info', 60)": {'mod': [62]}}}, {'path': 'lib/ansible/module_utils/facts/network/hurd.py', 'status': 'modified', 'Loc': {"('HurdPfinetNetwork', 'populate', 63)": {'mod': [66]}}}, {'path': 'lib/ansible/module_utils/facts/network/iscsi.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}, "('IscsiInitiatorNetworkCollector', 'collect', 33)": {'mod': [83, 84, 85, 95, 96, 97, 98]}}}, {'path': 'lib/ansible/module_utils/facts/other/facter.py', 'status': 'modified', 'Loc': {"('FacterFactCollector', 'find_facter', 24)": {'mod': [25, 26]}, "('FacterFactCollector', 'collect', 58)": {'mod': [76, 77]}}}, {'path': 'lib/ansible/module_utils/facts/other/ohai.py', 'status': 'modified', 'Loc': {"('OhaiFactCollector', 'find_ohai', 38)": {'mod': [39, 40]}, "('OhaiFactCollector', None, 27)": {'mod': [42]}, "('OhaiFactCollector', 'collect', 57)": {'mod': [70, 71]}}}, {'path': 'lib/ansible/module_utils/facts/sysctl.py', 'status': 'modified', 'Loc': {"(None, 
'get_sysctl', 23)": {'mod': [24, 25, 26, 30, 31, 32, 33, 34, 36, 37, 38, 39, 40, 41, 43, 44, 45, 46, 47, 48, 53, 54, 55, 56, 58, 59]}}}, {'path': 'test/units/module_utils/facts/network/test_fc_wwn.py', 'status': 'modified', 'Loc': {"(None, 'mock_get_bin_path', 92)": {'mod': [92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104]}, "(None, 'mock_run_command', 107)": {'mod': [109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119]}}}, {'path': 'test/units/module_utils/facts/network/test_generic_bsd.py', 'status': 'modified', 'Loc': {"(None, 'get_bin_path', 25)": {'mod': [25, 26, 27, 28, 29, 30]}}}, {'path': 'test/units/module_utils/facts/network/test_iscsi_get_initiator.py', 'status': 'modified', 'Loc': {"(None, 'test_get_iscsi_info', 39)": {'mod': [44, 50]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/module_utils/facts/hardware/hpux.py",
"lib/ansible/module_utils/basic.py",
"lib/ansible/module_utils/facts/hardware/sunos.py",
"lib/ansible/module_utils/facts/hardware/openbsd.py",
"lib/ansible/module_utils/facts/network/hpux.py",
"lib/ansible/module_utils/facts/other/facter.py",
"lib/ansible/module_utils/facts/network/iscsi.py",
"lib/ansible/module_utils/facts/sysctl.py",
"lib/ansible/module_utils/facts/other/ohai.py",
"lib/ansible/module_utils/common/process.py",
"lib/ansible/module_utils/facts/network/hurd.py",
"lib/ansible/module_utils/facts/hardware/freebsd.py",
"lib/ansible/module_utils/facts/network/generic_bsd.py",
"lib/ansible/module_utils/facts/hardware/darwin.py",
"lib/ansible/module_utils/facts/network/aix.py",
"lib/ansible/module_utils/facts/hardware/aix.py",
"lib/ansible/module_utils/facts/hardware/netbsd.py",
"lib/ansible/module_utils/facts/network/fc_wwn.py"
],
"doc": [],
"test": [
"test/units/module_utils/facts/network/test_generic_bsd.py",
"test/units/module_utils/facts/network/test_fc_wwn.py",
"test/units/module_utils/facts/network/test_iscsi_get_initiator.py"
],
"config": [],
"asset": []
} | 1 |
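The fix merged for the record above resolves fact-gathering binaries via an explicit search path instead of relying on the remote shell's PATH. A minimal sketch of that idea, using a hypothetical `find_binary` helper and an illustrative path list (this is not the actual `get_bin_path` implementation from `ansible.module_utils.common.process`):

```python
import os

# Candidate locations to search when the non-interactive shell's PATH is
# empty; this list is an assumption for illustration, not Ansible's own.
DEFAULT_PATHS = ["/sbin", "/usr/sbin", "/bin", "/usr/bin", "/usr/local/bin"]

def find_binary(name, paths=DEFAULT_PATHS):
    """Return the absolute path of an executable `name`, or None."""
    for directory in paths:
        candidate = os.path.join(directory, name)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

With such a helper, the hardware fact collector can run `find_binary("sysctl")` and get an absolute path such as `/usr/sbin/sysctl` even when the remote non-interactive shell exports an empty PATH.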
ansible | ansible | b5cffe8ced3c06c5c1542e37c382c74d5f61f3eb | https://github.com/ansible/ansible/issues/39759 | networking
module
support:network
nxos
bug
affects_2.6
cisco | nxos_snmp_user issues |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
nxos_snmp_user
##### ANSIBLE VERSION
```
ansible 2.6.0 (devel fed20b825f) last updated 2018/02/15 12:51:12 (GMT -400)
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /root/agents-ci/ansible/lib/ansible
executable location = /root/agents-ci/ansible/bin/ansible
python version = 2.7.6 (default, Oct 26 2016, 20:30:19) [GCC 4.8.4]
```
##### OS / ENVIRONMENT
Ansible Server : Ubuntu 14.04
Device: N7K running 7.0(3)D1(1)
##### SUMMARY
##### STEPS TO REPRODUCE
There are a few issues with the nxos_snmp_user module:
1. group is not a required parameter. When group is not specified, the platform still accepts the CLI and assigns the default group (usually network-operator).
2. More than one group cannot be added properly.
3. A group cannot be removed after being added without removing the user itself.
4. There are also platform bugs where the 'show snmp user | json' output is not consistent across older platforms, and the code fails on these older platforms.
5. There is dead code.
Note: I will open a PR shortly to address these issues. | null | https://github.com/ansible/ansible/pull/39760 | null | {'base_commit': 'b5cffe8ced3c06c5c1542e37c382c74d5f61f3eb', 'files': [{'path': 'lib/ansible/modules/network/nxos/nxos_snmp_user.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [52, 55], 'mod': [45]}, "(None, 'get_snmp_user', 124)": {'add': [168], 'mod': [151, 152]}, "(None, 'config_snmp_user', 181)": {'add': [192], 'mod': [181, 182, 187, 189, 191]}, "(None, 'remove_snmp_user', 177)": {'mod': [177, 178]}, "(None, 'main', 214)": {'mod': [217, 254, 255, 256, 258, 263, 266, 276, 288, 289, 294, 295]}}}, {'path': 'test/integration/targets/nxos_snmp_user/tests/common/sanity.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 21, 24, 25, 26, 27, 28, 31, 33, 35, 36, 37, 39, 40, 41]}}}]} | [] | [] | [] | {
"iss_type": "2\n4",
"iss_reason": "1\n2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/network/nxos/nxos_snmp_user.py"
],
"doc": [],
"test": [],
"config": [
"test/integration/targets/nxos_snmp_user/tests/common/sanity.yaml"
],
"asset": []
} | 1 |
ansible | ansible | 44b53141748d29220441e0799b54ea3130ac6753 | https://github.com/ansible/ansible/issues/78079 | support:core
bug
has_pr
affects_2.12 | Password lookup with seed not idempotent | ### Summary
According to the [docs](https://docs.ansible.com/ansible/latest/collections/ansible/builtin/password_lookup.html#parameter-seed), providing a seed should make the password lookup idempotent, but this does not appear to be the case.
> Identical seeds will yield identical passwords.
### Issue Type
Bug Report
### Component Name
ansible.builtin.password
### Ansible Version
```console
$ ansible --version
ansible [core 2.12.6]
config file = None
configured module search path = ['/Users/mike/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/mike/Development/vagrant/.venv/lib/python3.8/site-packages/ansible
ansible collection location = /Users/mike/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/mike/Development/vagrant/.venv/bin/ansible
python version = 3.8.9 (default, Apr 13 2022, 08:48:07) [Clang 13.1.6 (clang-1316.0.21.2.5)]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
BECOME:
======
CACHE:
=====
CALLBACK:
========
CLICONF:
=======
CONNECTION:
==========
HTTPAPI:
=======
INVENTORY:
=========
LOOKUP:
======
NETCONF:
=======
SHELL:
=====
VARS:
====
```
### OS / Environment
MacOS 12.4
### Steps to Reproduce
```shell
for i in {0..5}; do ansible -i /dev/null localhost -m debug -a 'msg={{ lookup("ansible.builtin.password", "/dev/null", seed="foo")}}'; done
```
### Expected Results
The same password should be produced each time
### Actual Results
```console
Different password is produced each time:
localhost | SUCCESS => {
"msg": "gvlUM1Mx27449Q5ga7QG"
}
localhost | SUCCESS => {
"msg": "oyPZ8QPS-Y1aqgAccGAg"
}
localhost | SUCCESS => {
"msg": "LeYqMugFDPr4tW7UBtDu"
}
localhost | SUCCESS => {
"msg": "P.3Eaq3AUgBqvHzP3o_s"
}
localhost | SUCCESS => {
"msg": ":nFjSHte6H4Q20oGs,CC"
}
localhost | SUCCESS => {
"msg": "6l7s1:vfXjMoOePMiXh,"
}
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | https://github.com/ansible/ansible/pull/78080 | null | {'base_commit': '44b53141748d29220441e0799b54ea3130ac6753', 'files': [{'path': 'lib/ansible/plugins/lookup/password.py', 'status': 'modified', 'Loc': {"(None, '_parse_parameters', 142)": {'add': [147], 'mod': [142, 175, 176, 177, 178, 180]}, '(None, None, None)': {'mod': [127]}, "('LookupModule', 'run', 337)": {'mod': [341]}}}, {'path': 'test/integration/targets/lookup_password/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [104]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/plugins/lookup/password.py"
],
"doc": [],
"test": [],
"config": [
"test/integration/targets/lookup_password/tasks/main.yml"
],
"asset": []
} | 1 |
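The seed behavior promised by the docs in the record above requires a PRNG seeded deterministically from the `seed` parameter rather than the global random state. A minimal sketch of that contract, with illustrative names (this is not the actual `lookup/password.py` code):

```python
import random
import string

def seeded_password(seed, length=20,
                    chars=string.ascii_letters + string.digits):
    # A dedicated Random instance seeded from the given value: identical
    # seeds yield identical passwords, which is what the `seed` option
    # of the password lookup promises.
    rng = random.Random(seed)
    return "".join(rng.choice(chars) for _ in range(length))
```

Because each call constructs its own `random.Random(seed)`, repeated invocations with the same seed are reproducible across runs, unlike code that draws from the shared module-level generator.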
ansible | ansible | fc3cc73b73a39b0ab629ba76ac4f9ca65cc38eee | https://github.com/ansible/ansible/issues/21893 | affects_2.2
c:module_utils/facts
bug | Gathering facts, zero division error in get_cpu_facts |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
module setup (ansible/module_utils/facts.py)
##### ANSIBLE VERSION
```
ansible 2.2.1.0
Python 2.7.10 (host)
Python 2.7.3 (remote)
```
##### CONFIGURATION
I have a lot of hosts and the problem occurs only with one. I updated Ansible a couple of times and didn't test the changes on this host.
##### OS / ENVIRONMENT
```
# host
tried on MacOS and CentOS 6.8
# remote
Linux hostname 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:19:35 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
```
##### SUMMARY
When I run the command:
```
ansible hostname -m setup -a 'gather_subset=!all'
```
everything works fine, but when I run a playbook or just try to gather facts I get a module failure:
```
ansible hostname -m setup
hostname | FAILED! => {
"changed": false,
"failed": true,
"module_stderr": "Shared connection to hostnameIP closed.\r\n",
"module_stdout": "Traceback (most recent call last):\r\n File \"/tmp/ansible_Z9fwoS/ansible_module_setup.py\", line 134, in <module>\r\n main()\r\n File \"/tmp/ansible_Z9fwoS/ansible_module_setup.py\", line 126, in main\r\n data = get_all_facts(module)\r\n File \"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3518, in get_all_facts\r\n File \"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\", line 3461, in ansible_facts\r\n File \"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\", line 987, in populate\r\n File \"/tmp/ansible_Z9fwoS/ansible_modlib.zip/ansible/module_utils/facts.py\", line 1132, in get_cpu_facts\r\nZeroDivisionError: integer division or modulo by zero\r\n",
"msg": "MODULE FAILURE"
}
```
I don't understand how to fix facts.py or find the cause of the zero division in the sources...
When I run on the remote host:
```
cat /proc/cpuinfo
```
it shows info about the CPU without any problem.
##### STEPS TO REPRODUCE
I can reproduce it only on this single host; with the other hosts everything works fine.
I tried updating all packages on the remote host but it didn't help.
| null | https://github.com/ansible/ansible/pull/24428 | null | {'base_commit': 'fc3cc73b73a39b0ab629ba76ac4f9ca65cc38eee', 'files': [{'path': 'lib/ansible/module_utils/facts.py', 'status': 'modified', 'Loc': {"('LinuxHardware', 'get_cpu_facts', 1124)": {'mod': [1207]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/module_utils/facts.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
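The traceback in the record above points at an integer division in `get_cpu_facts`; the defensive fix is to guard the divisor before dividing. A simplified sketch under assumed field names (this is not the actual `facts.py` code):

```python
def cpus_per_socket(total_logical_cpus, socket_count):
    """Return logical CPUs per socket, tolerating a zero socket count,
    as seen on hosts where /proc/cpuinfo lacks the usual topology fields."""
    if socket_count == 0:
        # Fall back to reporting all logical CPUs rather than raising
        # ZeroDivisionError and aborting fact gathering entirely.
        return total_logical_cpus
    return total_logical_cpus // socket_count
```

The point of the pattern is that a missing or unparseable `/proc/cpuinfo` field degrades to an approximate fact instead of crashing the whole setup module.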
ansible | ansible | 9495ddbc21da2a5c7967f01c4a958d32f203af65 | https://github.com/ansible/ansible/issues/54231 | module
support:community
feature
affects_2.8
remote_management | redfish_facts- Chassis - GetChassisThermals |
##### SUMMARY
This feature would implement a GetChassisThermals command for the Chassis category of redfish_facts, and would retrieve temperature-related properties from the Chassis/Thermal field for each available sensor.
##### ISSUE TYPE
- Feature Idea
##### COMPONENT NAME
redfish_facts
##### ADDITIONAL INFORMATION
| null | https://github.com/ansible/ansible/pull/54399 | null | {'base_commit': '9495ddbc21da2a5c7967f01c4a958d32f203af65', 'files': [{'path': 'lib/ansible/module_utils/redfish_utils.py', 'status': 'modified', 'Loc': {"('RedfishUtils', None, 21)": {'add': [895]}}}, {'path': 'lib/ansible/modules/remote_management/redfish/redfish_facts.py', 'status': 'modified', 'Loc': {"(None, 'main', 180)": {'add': [273]}, '(None, None, None)': {'mod': [165]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/remote_management/redfish/redfish_facts.py",
"lib/ansible/module_utils/redfish_utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 197a360977a52a31d6ab40db1f4752454e8b93e3 | https://github.com/ansible/ansible/issues/22374 | cloud
aws
affects_2.1
module
support:certified
bug | ec2_vpc_route_table can't update routes |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
ec2_vpc_route_table
##### ANSIBLE VERSION
```
ansible 2.1.3.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
##### OS / ENVIRONMENT
N/A
##### SUMMARY
I ran a script to create a NAT instance and created the routes going through the NAT using ec2_vpc_route_table.
I then deleted the NAT and ran the same script again.
ec2_vpc_route_table was not able to update the route with the new instance ID, but left the old network interface (which no longer existed) in place, thereby resulting in a black hole.
##### STEPS TO REPRODUCE
```yaml
- name: Create Backend route 1 and route it through NAT 1
ec2_vpc_route_table:
vpc_id: '{{ vpc_id }}'
region: '{{ vpc_region }}'
tags:
Name: "{{ vpc_name }} Backend network 1"
routes:
- dest: 0.0.0.0/0
instance_id: '{{ instance_id }}'
subnets:
- "{{ vpc_subnet['web_subnet']['subnet_one'].resource_tags.Name }}"
- "{{ vpc_subnet['db_subnet']['subnet_one'].resource_tags.Name }}"
```
##### EXPECTED RESULTS
I naturally expected that the route table would be updated
##### ACTUAL RESULTS
```
changed: [10.77.200.10] => {"changed": true, "invocation": {"module_args": {"aws_access_key": null, "aws_secret_key": null, "ec2_url": null, "lookup": "tag", "profile": null, "propagating_vgw_ids": null, "region": "us-west-2", "route_table_id": null, "routes": [{"destination_cidr_block": "0.0.0.0/0", "instance_id": "i-1234567890123456"}], "security_token": null, "state": "present", "subnets": ["test - web - us-west-2c", "test - database - us-west-2c"], "tags": {"Name": "test Backend network 1"}, "validate_certs": true, "vpc_id": "vpc-12345678"}, "module_name": "ec2_vpc_route_table"}, "route_table": {"id": "rtb-23456789", "routes": [{"destination_cidr_block": "10.99.0.0/16", "gateway_id": null, "instance_id": "i-0987654321098765", "interface_id": "eni-12345678", "origin": "CreateRoute", "state": "active", "vpc_peering_connection_id": null}, {"destination_cidr_block": "10.77.0.0/16", "gateway_id": "local", "instance_id": null, "interface_id": null, "origin": "CreateRouteTable", "state": "active", "vpc_peering_connection_id": null}, {"destination_cidr_block": "0.0.0.0/0", "gateway_id": null, "instance_id": null, "interface_id": "eni-87654321", "origin": "CreateRoute", "state": "blackhole", "vpc_peering_connection_id": null}], "tags": {"Name": "test Backend network 1"}, "vpc_id": "vpc-12345678"}}
```
| null | https://github.com/ansible/ansible/pull/27234 | null | {'base_commit': '197a360977a52a31d6ab40db1f4752454e8b93e3', 'files': [{'path': 'lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [336]}, "(None, 'index_of_matching_route', 342)": {'add': [345]}, "(None, 'ensure_routes', 348)": {'add': [351, 355, 394], 'mod': [375]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/cloud/amazon/ec2_vpc_route_table.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 8d78a829c60cc63e668683fb5d626eba942e6a39 | https://github.com/ansible/ansible/issues/33877 | support:core
affects_2.5
bug | YAML inventory: ungrouped group isn't populated | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
lib/ansible/plugins/inventory/yaml.py
##### ANSIBLE VERSION
```
ansible 2.5.0 (devel 7c187cae93) last updated 2017/12/13 16:21:51 (GMT +200)
```
##### OS / ENVIRONMENT
N/A
##### SUMMARY
When using the YAML inventory, the [`ungrouped` default group](http://docs.ansible.com/ansible/devel/intro_inventory.html#default-groups) is never populated.
##### STEPS TO REPRODUCE
`hosts.yaml`:
```yaml
all:
hosts:
testhost:
```
##### EXPECTED RESULTS
```
$ ansible-inventory -i hosts.yml --list
{
"_meta": {
"hostvars": {
"testhost": {}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {
"hosts": [
"testhost"
]
}
}
```
```
$ ansible localhost -i hosts.yml -m debug -a 'msg={{ groups }}'
localhost | SUCCESS => {
"msg": {
"all": [
"localhost"
],
"ungrouped": [
"localhost"
]
}
}
```
##### ACTUAL RESULTS
```
$ ansible-inventory -i hosts.yml --list
```
```
$ ansible-inventory -i /tmp/hosts.yml --list
{
"_meta": {
"hostvars": {
"localhost": {}
}
},
"all": {
"children": [
"ungrouped"
]
},
"ungrouped": {}
}
```
```
$ ansible localhost -i /tmp/hosts.yml -m debug -a 'msg={{ groups }}'
localhost | SUCCESS => {
"changed": false,
"msg": {
"all": [
"localhost"
],
"ungrouped": []
}
}
```
##### RESULT WITH 2.3
Using ansible 2.3 (`ansible 2.3.3.0 (stable-2.3 797d999513) last updated 2017/12/13 17:38:28 (GMT +200)`), `localhost` belongs to `ungrouped`.
```
$ ansible localhost -i hosts.yml -m debug -a 'msg={{ groups }}'
localhost | SUCCESS => {
"msg": {
"all": [
"localhost"
],
"ungrouped": [
"localhost"
]
}
}
``` | null | https://github.com/ansible/ansible/pull/33878 | null | {'base_commit': 'bf29cc79a681ea7c706fda4f95cd0d7fbd77b55a', 'files': [{'path': 'lib/ansible/inventory/data.py', 'status': 'modified', 'Loc': {"('InventoryData', 'reconcile_inventory', 105)": {'mod': [128, 129, 130, 140]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/inventory/data.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
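The fix linked in the record above makes inventory reconciliation place every host that belongs only to `all` into `ungrouped`. The invariant can be sketched as follows, with a simplified host-to-groups mapping instead of Ansible's real `InventoryData` structures:

```python
def reconcile_ungrouped(hosts_to_groups):
    """Given a mapping of host name -> set of group names, return the
    members of 'ungrouped': hosts in no group other than 'all' itself."""
    return sorted(
        host for host, groups in hosts_to_groups.items()
        # Subtracting the two implicit groups leaves only explicit
        # memberships; an empty remainder means the host is ungrouped.
        if not (groups - {"all", "ungrouped"})
    )
```

Under this rule the reproducer's `testhost` (declared directly under `all:`) lands in `ungrouped`, matching the 2.3 behavior shown in the issue.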
ansible | ansible | e633b93f859daafea3cf68bb79ad140ed8a42495 | https://github.com/ansible/ansible/issues/48415 | cloud
azure
module
support:community
bug
affects_2.6
postgresql | storage_mb parameter is not working in azure_rm_postgresqlserver | ##### SUMMARY
The storage configuration for creating a new database server instance is not working in azure_rm_postgresqlserver. storage_mb is always configured with the 5 GB default value.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
azure_rm_postgresqlserver
##### ANSIBLE VERSION
```
ansible 2.6.4
config file = None
configured module search path = [u'/Users/xxx/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /Library/Python/2.7/site-packages/ansible
executable location = /usr/local/bin/ansible
python version = 2.7.10 (default, Oct 6 2017, 22:29:07) [GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)]
```
##### OS / ENVIRONMENT
ubuntu:18.04
##### STEPS TO REPRODUCE
I'm executing the following provision in the module:
```
TASK [Create ADP server instance] *************************************************************************************************************************************************
task path: /home/baikal/delivery/ansible/playbook_infra_create_15_dbaas_azure.yml:40
Using module file /usr/local/lib/python2.7/dist-packages/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: root
<localhost> EXEC /bin/sh -c 'AZURE_SUBSCRIPTION_ID=xxxxxxxxx python && sleep 0'
[WARNING]: Azure API profile latest does not define an entry for PostgreSQLManagementClient
changed: [localhost] => {
"changed": true,
"fully_qualified_domain_name": "adptest-db.postgres.database.azure.com",
"id": "/subscriptions/xxxxxxxxx/resourceGroups/adptest-rg/providers/Microsoft.DBforPostgreSQL/servers/adptest-db",
"invocation": {
"module_args": {
"ad_user": null,
"adfs_authority_url": null,
"admin_password": "VALUE_SPECIFIED_IN_NO_LOG_PARAMETER",
"admin_username": "postgres",
"api_profile": "latest",
"auth_source": null,
"cert_validation_mode": null,
"client_id": null,
"cloud_environment": "AzureCloud",
"create_mode": "Default",
"enforce_ssl": false,
"location": "northeurope",
"name": "adptest-db",
"password": null,
"profile": null,
"resource_group": "adptest-rg",
"secret": null,
"sku": {
"capacity": "4",
"name": "GP_Gen5_4",
"tier": "GeneralPurpose"
},
"state": "present",
"storage_mb": 307200,
"subscription_id": null,
"tenant": null,
"version": "10"
}
},
"state": "Ready",
"version": "10"
}
```
After the ansible module execution I can check the postgres server configuration and I can find the following:
```
$ az postgres server show --resource-group adptest-rg --name adptest-db
{
"administratorLogin": "postgres",
"earliestRestoreDate": "2018-11-09T11:07:05.180000+00:00",
"fullyQualifiedDomainName": "adptest-db.postgres.database.azure.com",
"id": "/subscriptions/xxxxxxxxxx/resourceGroups/adptest-rg/providers/Microsoft.DBforPostgreSQL/servers/adptest-db",
"location": "northeurope",
"name": "adptest-db",
"resourceGroup": "adptest-rg",
"sku": {
"capacity": 4,
"family": "Gen5",
"name": "GP_Gen5_4",
"size": null,
"tier": "GeneralPurpose"
},
"sslEnforcement": "Disabled",
"storageProfile": {
"backupRetentionDays": 7,
"geoRedundantBackup": "Disabled",
"storageMb": 5120
},
"tags": null,
"type": "Microsoft.DBforPostgreSQL/servers",
"userVisibleState": "Ready",
"version": "10"
}
```
Here you can see that the storageMb capacity of the database server has been provisioned as 5 GB instead of the 300 GB specified in the storage_mb parameter of azure_rm_postgresqlserver.
As a workaround, after provisioning the database server I execute the following command:
`az postgres server update --storage-size 307200 --resource-group adptest-rg --name adptest-db`
Now if we check the current configuration of the database instance again, we can see it has been provisioned correctly:
```
$ az postgres server show --resource-group adptest-rg --name adptest-db
{
"administratorLogin": "postgres",
"earliestRestoreDate": "2018-11-09T11:07:05.180000+00:00",
"fullyQualifiedDomainName": "adptest-db.postgres.database.azure.com",
"id": "/subscriptions/xxxxxxx/resourceGroups/adptest-rg/providers/Microsoft.DBforPostgreSQL/servers/adptest-db",
"location": "northeurope",
"name": "adptest-db",
"resourceGroup": "adptest-rg",
"sku": {
"capacity": 4,
"family": "Gen5",
"name": "GP_Gen5_4",
"size": null,
"tier": "GeneralPurpose"
},
"sslEnforcement": "Disabled",
"storageProfile": {
"backupRetentionDays": 7,
"geoRedundantBackup": "Disabled",
"storageMb": 307200
},
"tags": null,
"type": "Microsoft.DBforPostgreSQL/servers",
"userVisibleState": "Ready",
"version": "10"
}
```
##### EXPECTED RESULTS
Create the postgres database server instance with the specified storage size (storage_mb)
##### ACTUAL RESULTS
An instance is always created with 5 GB as the storage size
| null | https://github.com/ansible/ansible/pull/51653 | null | {'base_commit': 'e633b93f859daafea3cf68bb79ad140ed8a42495', 'files': [{'path': 'lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py', 'status': 'modified', 'Loc': {"('AzureRMServers', 'create_update_postgresqlserver', 308)": {'add': [322]}, "('AzureRMServers', 'exec_module', 212)": {'mod': [230]}}}, {'path': 'test/integration/targets/azure_rm_postgresqlserver/aliases', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}}}, {'path': 'test/integration/targets/azure_rm_postgresqlserver/tasks/main.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/cloud/azure/azure_rm_postgresqlserver.py"
],
"doc": [],
"test": [],
"config": [
"test/integration/targets/azure_rm_postgresqlserver/tasks/main.yml"
],
"asset": [
"test/integration/targets/azure_rm_postgresqlserver/aliases"
]
} | 1 |
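The workaround above sidesteps what is essentially a parameter-mapping bug: a flat `storage_mb` option has to be nested into the SDK's storage profile before the create call, otherwise the service default (5120 MB) wins. A minimal sketch of that mapping — all names here are hypothetical illustrations, not the actual `azure_rm_postgresqlserver` implementation:

```python
# Hypothetical sketch: nesting a flat module option into the shape a
# cloud SDK create call expects. Illustrative names only.

def build_create_parameters(module_params):
    """Nest flat storage options under 'storage_profile' so the create
    call does not silently fall back to the 5 GB service default."""
    parameters = {
        "location": module_params["location"],
        "version": module_params.get("version"),
    }
    storage_profile = {}
    if module_params.get("storage_mb"):
        # Without this nesting step, storage_mb is ignored by the API.
        storage_profile["storage_mb"] = module_params["storage_mb"]
    if storage_profile:
        parameters["storage_profile"] = storage_profile
    return parameters
```

The point of the sketch is only the nesting: whatever the real SDK types are, the flat option must end up inside the storage profile, not at the top level of the request.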
ansible | ansible | 8e8a7c869ae219debf80456d3edac5804af22c2c | https://github.com/ansible/ansible/issues/27729 | affects_2.3
module
support:core
bug | Removed restricted key from module data: ansible_lxc_bridge |
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
Gathering Facts
##### ANSIBLE VERSION
```
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.13 (default, Jul 21 2017, 03:24:34) [GCC 7.1.1 20170630]
```
##### CONFIGURATION
No
##### OS / ENVIRONMENT
Archlinux, but probably not platform specific
##### SUMMARY
During gathering facts I get following warning
```
TASK [Gathering Facts] ************************************************************************
[WARNING]: Removed restricted key from module data: ansible_lxc_bridge = {u'macaddress':
u'70:85:c2:0b:a3:4a', u'features': {}, u'interfaces': [u'vethG18OR8', u'enp0s31f6',
u'vethWYJVBN'], u'mtu': 1500, u'active': True, u'promisc': False, u'stp': False, u'ipv4':
{u'broadcast': u'192.168.0.255', u'netmask': u'255.255.255.0', u'network': u'192.168.0.0',
u'address': u'192.168.0.110'}, u'ipv6': [{u'scope': u'link', u'prefix': u'64', u'address':
u'fe80::7285:c2ff:fe0b:a34a'}], u'device': u'lxc_bridge', u'type': u'bridge', u'id':
u'8000.7085c20ba34a'}
```
##### STEPS TO REPRODUCE
Run a playbook against a host with LXC and a bridge configured.
##### EXPECTED RESULTS
No warning.
##### ACTUAL RESULTS
Warning. | null | https://github.com/ansible/ansible/pull/28401 | null | {'base_commit': '8e8a7c869ae219debf80456d3edac5804af22c2c', 'files': [{'path': 'lib/ansible/playbook/task.py', 'status': 'modified', 'Loc': {"('Task', 'preprocess_data', 158)": {'add': [211, 224], 'mod': [208, 209, 214, 215, 216, 217, 223]}, '(None, None, None)': {'mod': [28]}}}, {'path': 'lib/ansible/plugins/action/__init__.py', 'status': 'modified', 'Loc': {"('ActionBase', '_clean_returned_data', 770)": {'mod': [783]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/plugins/action/__init__.py",
"lib/ansible/playbook/task.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
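One plausible mechanism for the warning above — hedged, since this is a sketch and not Ansible's actual cleaning code — is that returned facts are stripped whenever their key matches a prefix pattern built from connection plugin names (`lxc` is a connection plugin), so an innocent fact about an interface named `lxc_bridge` gets caught. An exact-match blocklist avoids the false positive:

```python
import re

# Illustrative sketch (not Ansible's real _clean_returned_data): why
# prefix-based cleaning of returned facts can hit innocent keys, and
# how exact matching avoids it. Plugin and key names are examples.
CONNECTION_PLUGINS = ("ssh", "lxc", "docker")
PREFIX_RE = re.compile(r"^ansible_(%s)_" % "|".join(CONNECTION_PLUGINS))

RESTRICTED_KEYS = {"ansible_ssh_pass", "ansible_lxc_host"}  # hypothetical

def clean_prefix(facts):
    """Overly broad: drops any ansible_<plugin>_* key, including a fact
    about a bridge interface that happens to be named lxc_bridge."""
    return {k: v for k, v in facts.items() if not PREFIX_RE.match(k)}

def clean_exact(facts):
    """Stricter: only drops keys that exactly match known internals."""
    return {k: v for k, v in facts.items() if k not in RESTRICTED_KEYS}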
ansible | ansible | a01ee2759d309f8433aefbdaf477903fe0156639 | https://github.com/ansible/ansible/issues/15988 | affects_2.0
support:core
bug | ansible -B n -P 0 does not return job_id | ##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
core
##### ANSIBLE VERSION
```
ansible 2.0.0.2
config file = /home/tg/workspace/training/ansible/content/samples/ad-hoc/ansible.cfg
configured module search path = Default w/o overrides
```
##### CONFIGURATION
```
[defaults]
host_key_checking=False
```
##### OS / ENVIRONMENT
Control machine & hosts: Ubuntu 14.04 x86_64
##### SUMMARY
When running an ad-hoc command with `-B` against the managed hosts, there does not seem to be any way to get hold of the job_id for later checking via the `async_status` module.
##### STEPS TO REPRODUCE
```
$ ansible all -i hosts -B 3600 -P 0 -a "sleep 1000"
training-1-1.tgbyte.de | SUCCESS | rc=0 >>
training-1-2.tgbyte.de | SUCCESS | rc=0 >>
training-1-3.tgbyte.de | SUCCESS | rc=0 >>
```
##### EXPECTED RESULTS
Instead of just a success message, I'd expect the response to contain some indication of the job_id that could be used for checking the status using `async_status`. http://grokbase.com/t/gg/ansible-project/14bcxt8xhc/three-questions-regarding-asynchronous-jobs hints at that this used to work before.
##### ACTUAL RESULTS
```
ansible all -i hosts -B 3600 -P 0 -vvvv -a "sleep 1000"
Using /home/tg/workspace/training/ansible/content/samples/ad-hoc/ansible.cfg as config file
Loaded callback minimal of type stdout, v2.0
<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281 )" )'
<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592 )" )'
<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de '( umask 22 && mkdir -p "$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146 )" && echo "$( echo $HOME/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146 )" )'
<training-1-1.tgbyte.de> PUT /tmp/tmpp3t6oH TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/command
<training-1-1.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-1.tgbyte.de]'
<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/command'
<training-1-1.tgbyte.de> PUT /tmp/tmp_g1yYO TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/async_wrapper
<training-1-1.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-1.tgbyte.de]'
<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/async_wrapper'
<training-1-1.tgbyte.de> PUT /tmp/tmpA8qdMB TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/arguments
<training-1-1.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-1.tgbyte.de]'
<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/async_wrapper 729788062869 3600 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/command /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/arguments'
<training-1-1.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-1.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-1.tgbyte.de 'rm -f -r /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-166413182233281/ > /dev/null 2>&1'
training-1-1.tgbyte.de | SUCCESS | rc=0 >>
<training-1-3.tgbyte.de> PUT /tmp/tmpPAuNt8 TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/command
<training-1-3.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-3.tgbyte.de]'
<training-1-2.tgbyte.de> PUT /tmp/tmpc1dxFb TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/command
<training-1-2.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-2.tgbyte.de]'
<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/command'
<training-1-3.tgbyte.de> PUT /tmp/tmpMDq4jL TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/async_wrapper
<training-1-3.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-3.tgbyte.de]'
<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/command'
<training-1-2.tgbyte.de> PUT /tmp/tmpZck5cm TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/async_wrapper
<training-1-2.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-2.tgbyte.de]'
<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/async_wrapper'
<training-1-3.tgbyte.de> PUT /tmp/tmppWLnjT TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/arguments
<training-1-3.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-3.tgbyte.de]'
<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'chmod a+rx /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/async_wrapper'
<training-1-2.tgbyte.de> PUT /tmp/tmpt5tfrR TO /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/arguments
<training-1-2.tgbyte.de> SSH: EXEC sftp -b - -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r '[training-1-2.tgbyte.de]'
<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/async_wrapper 80137669282 3600 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/command /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/arguments'
<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/async_wrapper 39280840727 3600 /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/command /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/arguments'
<training-1-3.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-3.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-3.tgbyte.de 'rm -f -r /home/schulung/.ansible/tmp/ansible-tmp-1464162877.98-68166976443146/ > /dev/null 2>&1'
training-1-3.tgbyte.de | SUCCESS | rc=0 >>
<training-1-2.tgbyte.de> ESTABLISH SSH CONNECTION FOR USER: schulung
<training-1-2.tgbyte.de> SSH: EXEC ssh -C -vvv -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=schulung -o ConnectTimeout=10 -o ControlPath=/home/tg/.ansible/cp/ansible-ssh-%h-%p-%r -tt training-1-2.tgbyte.de 'rm -f -r /home/schulung/.ansible/tmp/ansible-tmp-1464162877.97-74424695929592/ > /dev/null 2>&1'
training-1-2.tgbyte.de | SUCCESS | rc=0 >>
```
| null | https://github.com/ansible/ansible/pull/59935 | null | {'base_commit': 'a01ee2759d309f8433aefbdaf477903fe0156639', 'files': [{'path': 'lib/ansible/plugins/callback/minimal.py', 'status': 'modified', 'Loc': {"('CallbackModule', 'v2_runner_on_ok', 53)": {'mod': [65]}}}, {'path': 'lib/ansible/plugins/callback/oneline.py', 'status': 'modified', 'Loc': {"('CallbackModule', 'v2_runner_on_ok', 58)": {'mod': [67]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/plugins/callback/minimal.py",
"lib/ansible/plugins/callback/oneline.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | a28709f92ddd62138f59967aa1bce319ffacf576 | https://github.com/ansible/ansible/issues/81018 | module
bug
has_pr
P3
verified
affects_2.14 | dnf module : gcc-toolset-12-binutils package does not gets updated | ### Summary
dnf module : gcc-toolset-12-binutils package does not gets updated using the ansible playbook using the dnf module.
rest all the packages gets updated . Tried and tested using the below ansible playbook.
```
- hosts: localhost
tasks:
- name: update wget
dnf:
name: httpd,gcc-toolset-12-binutils
state: latest
update_cache: yes
update_only: yes
```
### Issue Type
Bug Report
### Component Name
dnf
### Ansible Version
```console
$ ansible --version
# rpm -qa | grep ansible-core
ansible-core-2.14.2-3.el8.x86_64
# ansible --version
ansible [core 2.14.2]
config file = /etc/ansible/ansible.cfg
configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.11/site-packages/ansible
ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/bin/ansible
python version = 3.11.2 (main, Feb 17 2023, 09:28:16) [GCC 8.5.0 20210514 (Red Hat 8.5.0-18)] (/usr/bin/python3.11)
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
# cat /etc/redhat-release
Red Hat Enterprise Linux release 8.8 (Ootpa)
### Steps to Reproduce
<!--- Paste example playbooks or commands between quotes below -->
```yaml (paste below)
- hosts: localhost
tasks:
- name: update wget
dnf:
name: httpd,gcc-toolset-12-binutils
state: latest
update_cache: yes
update_only: yes
```
### Expected Results
both the packages httpd,gcc-toolset-12-binutils should be updated to the latest with is not the case with the gcc-toolset-12-binutils package. it does not gets updated.
### Actual Results
```console
# rpm -qa | grep gcc-toolset-12-binutils
gcc-toolset-12-binutils-gold-2.38-17.el8.x86_64
gcc-toolset-12-binutils-2.38-16.el8.x86_64
[root@rhel84 ~]# rpm -qa | grep httpd
httpd-2.4.37-56.module+el8.8.0+18758+b3a9c8da.6.x86_64
httpd-tools-2.4.37-56.module+el8.8.0+18758+b3a9c8da.6.x86_64
redhat-logos-httpd-84.5-1.el8.noarch
httpd-filesystem-2.4.37-56.module+el8.8.0+18758+b3a9c8da.6.noarch
Latest package of gcc-toolset-12-binutils is available.
~~~
# yum list gcc-toolset-12-binutils
Updating Subscription Management repositories.
Last metadata expiration check: 0:14:40 ago on Sat 10 Jun 2023 02:25:31 AM IST.
Installed Packages
gcc-toolset-12-binutils.x86_64 2.38-16.el8 @rhel-8-for-x86_64-appstream-rpms
Available Packages
gcc-toolset-12-binutils.x86_64 2.38-17.el8 rhel-8-for-x86_64-appstream-rpms
~~~
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | https://github.com/ansible/ansible/pull/82725 | null | {'base_commit': 'a28709f92ddd62138f59967aa1bce319ffacf576', 'files': [{'path': 'lib/ansible/modules/dnf.py', 'status': 'modified', 'Loc': {"('DnfModule', '_is_newer_version_installed', 832)": {'add': [852], 'mod': [833, 834, 835, 836, 837, 839, 840, 841, 842, 844, 845, 846, 847, 848, 850, 851]}, '(None, None, None)': {'mod': [390]}, "('DnfModule', None, 410)": {'mod': [482, 483, 484, 485, 486, 487, 488, 489, 490, 491, 492, 493, 494, 495, 497, 498, 499, 500, 502, 503, 504, 505, 506, 508, 509, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 531, 532, 534, 535, 537, 538, 540, 541, 542, 543, 544, 545, 547, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568]}, "('DnfModule', '_ensure_dnf', 570)": {'mod': [578]}, "('DnfModule', '_is_installed', 815)": {'mod': [816, 818, 819, 820, 821, 823, 824, 825, 826, 827, 828, 830]}, "('DnfModule', '_install_remote_rpms', 983)": {'mod': [1003]}}}, {'path': 'lib/ansible/modules/dnf5.py', 'status': 'modified', 'Loc': {"(None, 'is_newer_version_installed', 366)": {'mod': [377, 378, 379, 380, 382, 384, 385, 386, 387, 389]}, "('Dnf5Module', 'run', 462)": {'mod': [607, 608, 609, 610, 611, 612, 613]}}}, {'path': 'test/integration/targets/dnf/tasks/repo.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [469]}}}, {'path': 'test/integration/targets/setup_rpm_repo/library/create_repo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [51]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/dnf5.py",
"lib/ansible/modules/dnf.py",
"test/integration/targets/setup_rpm_repo/library/create_repo.py"
],
"doc": [],
"test": [],
"config": [
"test/integration/targets/dnf/tasks/repo.yml"
],
"asset": []
} | 1 |
ansible | ansible | 2f75662a474b96ce377fdba15cc139d1ac25a138 | https://github.com/ansible/ansible/issues/6765 | mysql | Bug report: mysql_db does not fail when using import and bz2 or gz | ##### Issue Type:
Bug Report
##### Ansible Version:
ansible 1.6
Bug was introduced https://github.com/ansible/ansible/pull/4307
##### Environment:
N/A applies to all
##### Summary:
When using state=import, and the target= ends with .bz2 or .gz, it will succeed even when the bunzip2 or gunzip command fails. If the target does not exist, it succeeds. If the target exists but is not actually a zipped up file, it still succeeds. The module should fail if the bunzip2 or gunzip commands fail.
##### Steps To Reproduce:
ansible -i hosts realhostname -m mysql_db -a "name=test target=/backup/test.sql.gz state=import"
and the target does not exist, or is not really zipped up.
##### Expected Results:
I expect the module to return back fail with the stderr of the bunzip2 or gunzip command.
##### Actual Results:
It returns ok as in the entire thing succeeded (when indeed it did not)
| null | https://github.com/ansible/ansible/pull/6766 | null | {'base_commit': '2f75662a474b96ce377fdba15cc139d1ac25a138', 'files': [{'path': 'library/database/mysql_db', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [151, 153]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"library/database/mysql_db"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 2b723c6130f7d7887ba13cf5623bd49c39150bbf | https://github.com/ansible/ansible/issues/10840 | cloud
aws
affects_2.0
affects_2.3
c:inventory/contrib_script
docs | EC2 inventory script (ec2.py) needs better error messages & guidance | Tried running ec2.py/ec2.ini "out of the box" with all the proper boto configuration in place.
Got error "Forbidden" and nothing else - obviously not helpful in tracking down the issue.
After I hacked the script and added some additional error printing to the script I got:
```
<Code>OptInRequired</Code>
<Message>The AWS Access Key Id needs a subscription for the service</Message>
```
Still it wasn't clear what the problem was and where to go to fix it.
Eventually I guessed lucky and set rds = False in ec2.ini and this worked.
Suggestions:
- rds should be defaulted to 'False' especially since script fails cryptically for users not signed up to rds
- Error message should indicate which part of the script failed (rds, ec2, etc)
- Error message should ideally suggest a solution (i.e. set rds = False if you're not signed up to rds)
- Script should provide fuller error message not just "Forbidden"
| null | https://github.com/ansible/ansible/pull/11006 | null | {'base_commit': '2b723c6130f7d7887ba13cf5623bd49c39150bbf', 'files': [{'path': 'contrib/inventory/ec2.py', 'status': 'modified', 'Loc': {"('Ec2Inventory', 'fail_with_error', 517)": {'add': [518]}, "('Ec2Inventory', 'get_instances_by_region', 386)": {'mod': [409]}, "('Ec2Inventory', 'get_rds_instances_by_region', 411)": {'mod': [428]}, "('Ec2Inventory', 'get_elasticache_clusters_by_region', 430)": {'mod': [451, 461]}, "('Ec2Inventory', 'get_elasticache_replication_groups_by_region', 466)": {'mod': [485, 495]}, "('Ec2Inventory', None, 137)": {'mod': [517]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"contrib/inventory/ec2.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | ff5253fa0efacf5192b6d0f8b41b27a3033d7897 | https://github.com/ansible/ansible/issues/65815 | cloud
python3
module
docker
support:community
bug
has_pr
affects_2.9 | docker_network with multiple subnets always changes | <!--- Verify first that your issue is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below -->
When using `docker_network` to create a network with multiple subnets, the task will delete/create the network even if it already exists with the correct subnets. Ansible fails to judge if the existing subnets are correct, probably because of the way the arrays of subnets are compared in python.
##### ISSUE TYPE
- Bug Report
##### COMPONENT NAME
<!--- Write the short name of the module, plugin, task or feature below, use your best guess if unsure -->
docker_network
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
ansible 2.9.2
config file = /etc/ansible/ansible.cfg
configured module search path = ['/home/gunix/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python3.8/site-packages/ansible
executable location = /usr/bin/ansible
python version = 3.8.0 (default, Oct 23 2019, 18:51:26) [GCC 9.2.0]
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
ANSIBLE_PIPELINING(/etc/ansible/ansible.cfg) = True
DEFAULT_LOG_PATH(/etc/ansible/ansible.cfg) = /var/log/ansible/ansible.log
HOST_KEY_CHECKING(/etc/ansible/ansible.cfg) = True
INTERPRETER_PYTHON(/etc/ansible/ansible.cfg) = /usr/bin/python3
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. target OS versions, network device firmware, etc. -->
Both systems are running ArchLinux.
##### STEPS TO REPRODUCE
<!--- Describe exactly how to reproduce the problem, using a minimal test-case -->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- name: "deploy network namespace that can hold all IPs"
docker_network:
name: "macvlan1"
driver: "macvlan"
internal: false
driver_options:
parent: "{{ ansible_default_ipv4.alias }}"
ipam_config: "{{ macvlan_subnets }}"
```
also vars:
```
macvlan_subnets:
- gateway: 10.162.208.1
subnet: 10.162.208.0/24
- gateway: 10.162.223.1
subnet: 10.162.223.0/24
- gateway: 10.162.210.1
subnet: 10.162.210.0/24
```
<!--- HINT: You can paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- Describe what you expected to happen when running the steps above -->
I was expecting to run the play 10 times and get Changed only on the first run and OK on the other 9 runs.
##### ACTUAL RESULTS
<!--- Describe what actually happened. If possible run with extra verbosity (-vvvv) -->
The docker network ALWAYS changes, even if the subnets are correct on the server, causing all docker containers on the network to disconnect. This will cause downtime for all the services that run on the node.
<!--- Paste verbatim command output between quotes -->
```paste below
TASK [gen4 : deploy network namespace that can hold all IPs] ****************************************************************
--- before
+++ after
@@ -1,19 +1,19 @@
{
- "connected.10.162.208.129": false,
- "connected.10.162.210.161": false,
- "connected.10.162.210.169": false,
- "connected.10.162.210.170": false,
- "connected.10.162.210.171": false,
- "connected.10.162.210.172": false,
- "connected.10.162.210.173": false,
- "connected.10.162.223.72": false,
- "connected.10.162.223.73": false,
- "connected.10.162.223.74": false,
- "connected.10.162.223.75": false,
- "connected.10.162.223.76": false,
+ "connected.10.162.208.129": true,
+ "connected.10.162.210.161": true,
+ "connected.10.162.210.169": true,
+ "connected.10.162.210.170": true,
+ "connected.10.162.210.171": true,
+ "connected.10.162.210.172": true,
+ "connected.10.162.210.173": true,
+ "connected.10.162.223.72": true,
+ "connected.10.162.223.73": true,
+ "connected.10.162.223.74": true,
+ "connected.10.162.223.75": true,
+ "connected.10.162.223.76": true,
"exists": true,
- "ipam_config[0].gateway": "10.162.210.1",
- "ipam_config[0].subnet": "10.162.210.0/24",
- "ipam_config[1].gateway": "10.162.210.1",
- "ipam_config[1].subnet": "10.162.210.0/24"
+ "ipam_config[0].gateway": "10.162.208.1",
+ "ipam_config[0].subnet": "10.162.208.0/24",
+ "ipam_config[1].gateway": "10.162.223.1",
+ "ipam_config[1].subnet": "10.162.223.0/24"
}
changed: [server1337.gun1x]
```
| null | https://github.com/ansible/ansible/pull/65839 | null | {'base_commit': 'ff5253fa0efacf5192b6d0f8b41b27a3033d7897', 'files': [{'path': 'lib/ansible/modules/cloud/docker/docker_network.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [367]}, "('DockerNetworkManager', '__init__', 370)": {'add': [390]}, "('DockerNetworkManager', 'has_different_config', 408)": {'add': [451], 'mod': [454, 455, 456, 457, 458, 459, 460, 467, 468, 469, 470, 471, 472, 475]}, "(None, 'get_ip_version', 338)": {'mod': [338, 339]}, "(None, 'normalize_ipam_config_key', 354)": {'mod': [355]}}}, {'path': 'test/integration/targets/docker_network/tasks/tests/ipam.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14, 101, 172, 233, 282]}}}, {'path': 'test/units/modules/cloud/docker/test_docker_network.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8]}, "(None, 'test_get_ip_version_positives', 18)": {'mod': [18, 19]}, "(None, 'test_get_ip_version_negatives', 28)": {'mod': [28, 30]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/modules/cloud/docker/docker_network.py"
],
"doc": [
"test/integration/targets/docker_network/tasks/tests/ipam.yml"
],
"test": [
"test/units/modules/cloud/docker/test_docker_network.py"
],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 9de4f24d7ac3a205cdc723402f78d03a1fc961f8 | https://github.com/ansible/ansible/issues/75675 | support:core
docs
docsite
affects_2.12
docs_only
hackathon | Docs: Use code-block elements to format code examples: Community Guide | ### Summary
**Problem**:
Throughout the Ansible docs, there are instances where example code is preceded with a lead-in sentence ending in `::`.
**Solution:**
Enclose code in a `.. code-block:: <lexer>` element, so that translation processes know to skip this content.
For a list of allowed values for _`<lexer>`_ , refer to [Syntax highlighting - Pygments](https://docs.ansible.com/ansible/latest/dev_guide/style_guide/index.html#syntax-highlighting-pygments).
**Scope:**
In the Community Guide, there is 1 instance of a lead-in sentence ending with `::`. Use the following `grep` command to identify the files and line numbers:
```
$ grep -rn --include "*.rst" "^[[:blank:]]*[^[:blank:]\.\.].*::$" .
```
**Example:**
Before:
```
* If the file has a unique title, use that for the main page anchor::
.. _unique_page::
```
After:
```
* If the file has a unique title, use that for the main page anchor.
.. code-block:: rst
.. _unique_page::
```
### Issue Type
Documentation Report
### Component Name
docs/docsite/rst/dev_guide
### Ansible Version
```console
n/a
```
### Configuration
```console
n/a
```
### OS / Environment
n/a
### Additional Information
When example code is enclosed within a `code-block` element, translation programs do not attempt to translate the code.
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | https://github.com/ansible/ansible/pull/75847 | null | {'base_commit': '9de4f24d7ac3a205cdc723402f78d03a1fc961f8', 'files': [{'path': 'docs/docsite/rst/community/communication.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [74]}}}, {'path': 'docs/docsite/rst/community/development_process.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [316, 323, 331]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/docsite/rst/community/communication.rst",
"docs/docsite/rst/community/development_process.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | ea1639e633fffac8a9db4b8b00ff8aaa4a23dadb | https://github.com/ansible/ansible/issues/52316 | windows
support:core
docs
affects_2.8 | Windows FAQ should mention possible SSL protocol issue | <!--- Verify first that your improvement is not already reported on GitHub -->
<!--- Also test if the latest release and devel branch are affected too -->
<!--- Complete *all* sections as described, this form is processed automatically -->
##### SUMMARY
<!--- Explain the problem briefly below, add suggestions to wording or structure -->
By default, TLS 1.0 is the maximum supported TLS version on Windows 7. However, Linux distributions (at least Debian) have begun to disable it and require TLS 1.2 as a minimum. Thus, by default, the connection fails with this message:
`ntlm: HTTPSConnectionPool(host='my-host', port=5986): Max retries exceeded with url: /wsman (Caused by SSLError(SSLError(1, '[SSL: UNSUPPORTED_PROTOCOL] unsupported protocol (_ssl.c:1056)')))
`
Could you explain this issue on https://docs.ansible.com/ansible/latest/user_guide/windows_faq.html and add the possible workarounds (enable TLS 1.2 on the Windows 7 target / temporarily re-enable TLS 1.0 on the controller) that are well described in the original discussion at https://groups.google.com/forum/#!msg/ansible-project/CCjQTWSAt4I/mHsdpJGUAwAJ ?
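On the controller side, the stdlib `ssl` module can show which protocol range Python/OpenSSL will negotiate by default. A quick diagnostic sketch (not a fix; the printed values depend on the local OpenSSL build and system policy):

```python
import ssl

ctx = ssl.create_default_context()
print(ssl.OPENSSL_VERSION)   # e.g. "OpenSSL 1.1.1a  20 Nov 2018"
print(ctx.minimum_version)   # e.g. TLSVersion.MINIMUM_SUPPORTED or TLSVersion.TLSv1_2
# If the system policy pins the minimum to TLSv1_2, a TLS 1.0-only
# Windows 7 host cannot complete the handshake.
```

On Debian the effective minimum is often set by `MinProtocol` in `/etc/ssl/openssl.cnf` rather than by Python itself.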
<!--- HINT: Did you know the documentation has an "Edit on GitHub" link on every page ? -->
##### ISSUE TYPE
- Documentation Report
##### COMPONENT NAME
<!--- Write the short name of the rst file, module, plugin, task or feature below, use your best guess if unsure -->
windows_faq.rst
##### ANSIBLE VERSION
<!--- Paste verbatim output from "ansible --version" between quotes -->
```paste below
```
##### CONFIGURATION
<!--- Paste verbatim output from "ansible-config dump --only-changed" between quotes -->
```paste below
```
##### OS / ENVIRONMENT
<!--- Provide all relevant information below, e.g. OS version, browser, etc. -->
Debian testing with openssl 1.1.1a-1.
##### ADDITIONAL INFORMATION
<!--- Describe how this improves the documentation, e.g. before/after situation or screenshots -->
Windows 7 is probably still a common target, and Debian Buster (next stable probably available in the summer) will probably be a common controller, so this issue should be briefly explained in the documentation.
Regards,
Yvan
<!--- HINT: You can paste gist.github.com links for larger files -->
| null | https://github.com/ansible/ansible/pull/54016 | null | {'base_commit': 'ea1639e633fffac8a9db4b8b00ff8aaa4a23dadb', 'files': [{'path': 'docs/docsite/rst/user_guide/windows_winrm.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [751], 'mod': [505, 506, 507, 509, 510, 512, 514, 515, 516, 517, 519, 520, 521, 522]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/docsite/rst/user_guide/windows_winrm.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | a6d4c3ff7cf43c24be6622102cee834fc5096496 | https://github.com/ansible/ansible/issues/78600 | easyfix
support:core
has_pr
docs
affects_2.13 | scp_if_ssh not working as intended with OpenSSH since version 9.0 | ### Summary
The option `scp_if_ssh = true` is used to force Ansible to use scp instead of sftp on targets that don't support sftp. However, since OpenSSH 9.0 (8.8 on Arch Linux, it seems), even the scp utility defaults to using sftp. The old behavior can be enabled by additionally setting `scp_extra_args = "-O"` to force scp to use the old protocol.
I recognize that this is not an Ansible bug, but it may break documented and expected behavior.
OpenSSH Changelog: https://www.openssh.com/txt/release-9.0
> This release switches scp(1) from using the legacy scp/rcp protocol to using the SFTP protocol by default.
### Issue Type
~Bug Report~
Documentation Report
### Component Name
connection, ssh, scp
### Ansible Version
```console
ansible [core 2.13.2]
config file = None
configured module search path = ['/home/ansible/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /usr/local/lib/python3.9/dist-packages/ansible
ansible collection location = /home/ansible/.ansible/collections:/usr/share/ansible/collections
executable location = /usr/local/bin/ansible
python version = 3.9.2 (default, Feb 28 2021, 17:03:44) [GCC 10.2.1 20210110]
jinja version = 3.1.2
libyaml = True
```
### Configuration
```console
CONNECTION:
==========
ssh:
___
scp_extra_args(env: ANSIBLE_SCP_EXTRA_ARGS) = -O
scp_if_ssh(env: ANSIBLE_SCP_IF_SSH) = true
```
### OS / Environment
Debian Sid
### Steps to Reproduce
configure sshd to not offer sftp. (eg. delete `Subsystem sftp /usr/lib/ssh/sftp-server` from `/etc/ssh/sshd_config` and restart)
create a small example playbook, contents are irrelevant
<!--- Paste example playbooks or commands between quotes below -->
```yaml
- hosts: localhost
gather_facts: true
remote_user: root
tasks:
- name: install a nonexistant package
package:
name:
- less-is-more
```
execute with the Ansible configuration or environment setting to use scp:
```
export ANSIBLE_SCP_IF_SSH=false
ansible-playbook -c ssh playbook.yml
```
### Expected Results
```
ansible@instance:~$ ansible-playbook -c ssh playbook.yml
PLAY [localhost] ***************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]
TASK [install a nonexistant package] *******************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "No package matching 'less-is-more' is available"}
PLAY RECAP *********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
### Actual Results
```console
with only `scp_if_ssh`:
ansible@instance:~$ ansible-playbook -c ssh playbook.yml
PLAY [localhost] ***************************************************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************************************************
fatal: [localhost]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via scp: scp: Connection closed\r\n", "unreachable": true}
PLAY RECAP *********************************************************************************************************************************************
localhost : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
```
with the additional setting to add `-O` to scp (working correctly):
```
ansible@instance:~$ export ANSIBLE_SCP_EXTRA_ARGS="-O"
ansible@instance:~$ ansible-playbook -c ssh playbook.yml
PLAY [localhost] ***************************************************************************************************
TASK [Gathering Facts] *********************************************************************************************
ok: [localhost]
TASK [install a nonexistant package] *******************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "No package matching 'less-is-more' is available"}
PLAY RECAP *********************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | null | https://github.com/ansible/ansible/pull/78745 | null | {'base_commit': 'a6d4c3ff7cf43c24be6622102cee834fc5096496', 'files': [{'path': 'lib/ansible/plugins/connection/ssh.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [294, 312]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/plugins/connection/ssh.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ansible | ansible | 4c5a6d9d44f81d88cca2a9f13966af326bed4b64 | https://github.com/ansible/ansible/issues/23078 | affects_2.4
support:core
bug | Jinja filters output trailing whitespace breaking idempotency | <!---
Verify first that your issue/request is not already reported on GitHub.
Also test if the latest release, and master branch are affected too.
-->
##### ISSUE TYPE
<!--- Pick one below and delete the rest: -->
- Bug Report
##### COMPONENT NAME
`lib/ansible/parsing/yaml/dumper.py:AnsibleDumper`
##### ANSIBLE VERSION
<!--- Paste verbatim output from “ansible --version” between quotes below -->
```
ansible 2.4.0 (devel 6c101087ac) last updated 2017/03/29 16:09:54 (GMT +200)
config file =
configured module search path = [u'/home/user/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
python version = 2.7.9 (default, Jun 29 2016, 13:08:31) [GCC 4.9.2]
[and]
ansible 2.2.2.0 (stable-2.2 2273800f7c) last updated 2017/03/29 10:54:33 (GMT +200)
lib/ansible/modules/core: (detached HEAD 31a1f19cd8) last updated 2017/03/29 16:07:02 (GMT +200)
lib/ansible/modules/extras: (detached HEAD 921bc0d464) last updated 2017/03/29 14:42:48 (GMT +200)
```
##### CONFIGURATION
Ansible default. No changes.
##### OS / ENVIRONMENT
Isolated Debian Jessie VM on Qubes OS setup only for testing with Ansible devel.
##### SUMMARY
The `to_nice_json` filter and others like `indent` output trailing whitespace. That is not in itself a problem (although bad style). But in the case of `to_nice_json` it becomes a problem because it is potentially not idempotent which breaks CI which test for this property (e.g. DebOps).
Also note that the task itself outputs trailing whitespace (select the task output below).
##### STEPS TO REPRODUCE
<!---
For bugs, show exactly how to reproduce the problem, using a minimal test-case.
For new features, show how the feature would be used.
-->
<!--- Paste example playbooks or commands between quotes below -->
```yaml
---
- hosts: localhost
vars:
input:
- test: True
test2:
- 23
- test: True
tasks:
- name: Jinja2 templating outputting trailing spaces which change depending
debug:
msg: "{{ (input | to_nice_json).split('\n') }}"
# Workaround is part of https://github.com/debops/debops-playbooks/blob/master/templates/debops__tpl_macros.j2
- name: Clean Jinja2 templating using workaround
debug:
msg: "{{ (input | to_nice_json | regex_replace(\"[ \\t\\r\\f\\v]+(\\n|$)\", \"\\1\")).split('\n') }}"
```
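The trailing spaces come from the JSON item separator: when `indent` is combined with a `', '` item separator, every comma that precedes a newline leaves a space behind. A standalone sketch of the difference (this mirrors the usual `json.dumps` fix, not Ansible internals):

```python
import json

data = [{"test": True, "test2": [23]}, {"test": True}]

# ', ' item separator: each comma before a newline keeps its trailing space
messy = json.dumps(data, indent=4, separators=(', ', ': '))
# ',' item separator: no trailing whitespace on any line
clean = json.dumps(data, indent=4, separators=(',', ': '))

print(any(line != line.rstrip() for line in messy.splitlines()))  # → True
print(any(line != line.rstrip() for line in clean.splitlines()))  # → False
```

Passing explicit separators inside the `to_nice_json` filter would make its output idempotent without the `regex_replace` workaround.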
<!--- You can also paste gist.github.com links for larger files -->
##### EXPECTED RESULTS
<!--- What did you expect to happen when running the steps above? -->
```
TASK [Clean Jinja2 templating using workaround]
ok: [localhost] => {
"changed": false,
"msg": [
"[",
" {",
" \"test\": true,",
" \"test2\": [",
" 23",
" ]",
" },",
" {",
" \"test\": true",
" }",
"]"
]
}
```
Example role in CI: https://travis-ci.org/debops/ansible-apt/builds/216374820
##### ACTUAL RESULTS
<!--- What actually happened? If possible run with extra verbosity (-vvvv) -->
<!--- Paste verbatim command output between quotes below -->
```
TASK [Jinja2 template outputting trailing spaces which change depending on next element]
ok: [localhost] => {
"changed": false,
"msg": [
"[",
" {",
" \"test\": true, ",
" \"test2\": [",
" 23",
" ]",
" }, ",
" {",
" \"test\": true",
" }",
"]"
]
}
```
Example role in CI: https://travis-ci.org/debops/ansible-apt/builds/216355310#L824-L825
| null | https://github.com/ansible/ansible/pull/42633 | null | {'base_commit': '4c5a6d9d44f81d88cca2a9f13966af326bed4b64', 'files': [{'path': 'lib/ansible/plugins/filter/core.py', 'status': 'modified', 'Loc': {"(None, 'to_nice_json', 87)": {'mod': [90]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/ansible/plugins/filter/core.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | d5ca8ca34e6a63978f368e733c11fad0b6619096 | https://github.com/ultralytics/yolov5/issues/2405 | bug | Can't train in DDP mode after recent update | ## 🐛 Bug
When I pull the latest code, I found that DDP training would get stuck in the first few epochs.
I ran some tests to see which commit caused this bug and I found commit `a3ecf0fd640465f9a7c009e81bcc5ecabf381004` on Mar 3 worked well.
But when I `git checkout` commit `e931b9da33f45551928059b8d61bddd50e401e48` on Mar 4, the bug appeared.
And the bug still exists in the latest commit.
## To Reproduce (REQUIRED)
`python3 -m torch.distributed.launch --nproc_per_node 4 train.py`
The training process would get stuck forever unless you terminate it manually.
And it still occupied the GPU memory unless the process was killed with `kill -9 xxxxx`

## Expected behavior
Roll back to the older code, and get the expected behavior.
```bash
$ git checkout a3ecf0fd640465f9a7c009e81bcc5ecabf381004
$ python3 -m torch.distributed.launch --nproc_per_node 4 train.py
```

## Environment
If applicable, add screenshots to help explain your problem.
- OS: Ubuntu 20.04
- GPU: 1080 Ti * 4
- Python: 3.8
- pytorch: 1.7.1
- CUDA: 11.1
- Driver: 455.32
## Additional
It seems like the latest commit working fine on 2 * 3090, I'm not sure yet, I will do some further tests on 3090 or other GPU. | null | https://github.com/ultralytics/yolov5/pull/2421 | null | {'base_commit': 'd5ca8ca34e6a63978f368e733c11fad0b6619096', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 40)": {'mod': [184, 185, 186, 217]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | d95978a562bec74eed1d42e370235937ab4e1d7a | https://github.com/ultralytics/yolov5/issues/6153 | enhancement | Enable AdamW Optimizer | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
When we use Adam, we have to tune the learning rate along with the batch size.
That is cumbersome; with [AdamW](https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html), we don't have to re-tune the learning rate even if we change the batch size.
So, it is nice to be able to use this option.
I have created PR to enable AdamW optimizer. Please check it out.
#6152
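The practical difference is that AdamW applies weight decay directly to the weights instead of folding it into the gradient, so the decay is not rescaled by Adam's adaptive denominator. A minimal scalar sketch of the two decay styles (bias correction omitted for brevity; this is an illustration, not torch's implementation):

```python
def adam_like_step(w, g, m, v, lr=1e-3, b1=0.9, b2=0.999,
                   eps=1e-8, wd=0.1, decoupled=False):
    """One simplified Adam step with coupled (L2) or decoupled weight decay."""
    if not decoupled:
        g = g + wd * w              # L2: decay enters the adaptive update
    m = b1 * m + (1 - b1) * g
    v = b2 * v + (1 - b2) * g * g
    w = w - lr * m / (v ** 0.5 + eps)
    if decoupled:
        w = w - lr * wd * w         # AdamW: decay applied to weights directly
    return w, m, v

w_l2, _, _ = adam_like_step(1.0, 1.0, 0.0, 0.0, decoupled=False)
w_dec, _, _ = adam_like_step(1.0, 1.0, 0.0, 0.0, decoupled=True)
print(w_l2 != w_dec)  # → True: the two decay styles update the weight differently
```

With `wd=0` the two branches are identical; with decay enabled they diverge, which is why AdamW's decay strength stays meaningful across batch-size changes.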
### Use case
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | null | https://github.com/ultralytics/yolov5/pull/6152 | null | {'base_commit': 'd95978a562bec74eed1d42e370235937ab4e1d7a', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 58)": {'add': [159], 'mod': [158]}, '(None, None, None)': {'mod': [25]}, "(None, 'parse_opt', 442)": {'mod': [463]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"train.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | b2bef8f6d8e4c008bae72c211a186d75732fc213 | https://github.com/ultralytics/yolov5/issues/1639 | enhancement | Promote a new activation function recently developed by Kuangshi technology!!!! | ## 🚀 Feature
ReLU and PReLU are extended to 2D activation functions by adding a spatial condition with negligible overhead.
## Motivation
Can a vision-task-specific activation function be designed?
## Pitch
I would like to suggest a branch, but because the work is too busy, directly paste the code. It can be used directly.
## Alternatives
None.
## Additional context
```python3
import torch
import torch.nn as nn
from torch import Tensor  # needed for the type annotation in forward()
class FReLU(nn.Module):
r""" Applies the FReLU function element-wise.
`"Funnel Activation for Visual Recognition" <https://arxiv.org/pdf/2007.11824.pdf>`_
Examples:
>>> channels = 64
>>> frelu = FReLU(channels)
>>> input = torch.randn(1, channels, 64, 64)
>>> output = frelu(input)
"""
def __init__(self, channels):
super().__init__()
self.FReLU = nn.Sequential(
nn.Conv2d(channels, channels, kernel_size=3, stride=1, padding=1, groups=channels, bias=False),
nn.BatchNorm2d(channels)
)
def forward(self, input: Tensor):
out = self.FReLU(input)
return torch.max(input, out)
```
Thank you very much for your long-term promotion of Yolo technology. I will submit some code after a while. Good luck to you! | null | https://github.com/ultralytics/yolov5/pull/1666 | null | {'base_commit': 'b2bef8f6d8e4c008bae72c211a186d75732fc213', 'files': [{'path': 'utils/activations.py', 'status': 'modified', 'Loc': {"('FReLU', '__init__', 66)": {'mod': [68]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"utils/activations.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | d223460f3a4b4151437b15ac83990cea4b0f42e2 | https://github.com/ultralytics/yolov5/issues/11170 | bug | Class filtering does not work in segmentation code | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
Training
### Bug
I tried to filter classes that I train with as explained [here](https://github.com/ultralytics/yolov5/issues/1978). I found out that it works with `train.py` but not with `segment/train.py`.
I expect that if I change the following line:
```python
include_class = [1] # filter labels to include only these classes (optional)
```
in `utils/dataloaders.py` line `533`, then in `train.py` and `segment/train.py`
- the code does not crash
- the code trains with only class `1` (if such class exists in the `.yaml` file)
What I get:
- `train.py` -> works as expected
- `segment/train.py` -> crashes:
```
Traceback (most recent call last):
File "segment/train.py", line 664, in <module>
main(opt)
File "segment/train.py", line 555, in main
train(opt.hyp, opt, device, callbacks)
File "segment/train.py", line 180, in train
train_loader, dataset = create_dataloader(
File "yolov5/utils/segment/dataloaders.py", line 46, in create_dataloader
dataset = LoadImagesAndLabelsAndMasks(
File "yolov5/utils/segment/dataloaders.py", line 102, in __init__
super().__init__(path, img_size, batch_size, augment, hyp, rect, image_weights, cache_images, single_cls,
File "yolov5/utils/dataloaders.py", line 540, in __init__
self.segments[i] = segment[j]
TypeError: only integer scalar arrays can be converted to a scalar index
```
### Environment
- YOLO: yolov5 `3e55763d45f9c5f8217e4dad5ba1e6c1f42e3bf8`
- OS: Ubuntu 20.04
- Python 3.8
### Minimal Reproducible Example
- clone yolov5 repo
- install dependencies with pip
- edit the lines as explained in the `Bug` section
```
python3 segment/train.py
```
### Additional
There will be a PR showing the fix for this
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | null | https://github.com/ultralytics/yolov5/pull/11171 | null | {'base_commit': 'd223460f3a4b4151437b15ac83990cea4b0f42e2', 'files': [{'path': 'utils/dataloaders.py', 'status': 'modified', 'Loc': {"('LoadImagesAndLabels', '__init__', 439)": {'add': [533], 'mod': [540]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"utils/dataloaders.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | 2da6444c9251f77cfd3e410369cd067245d961b5 | https://github.com/ultralytics/yolov5/issues/916 | question
Stale | premature end of JPEG images | ## ❔Question
`Epoch gpu_mem GIoU obj cls total targets img_size
1/99 2.87G 0.05456 0.04197 0 0.09652 10 640: 100% 157/157 [00:52<00:00, 2.98it/s]
Class Images Targets P R mAP@.5 mAP@.5:.95: 0% 0/157 [00:00<?, ?it/s]Premature end of JPEG file
Class Images Targets P R mAP@.5 mAP@.5:.95: 100% 157/157 [00:19<00:00, 8.21it/s]
all 2.5e+03 1e+04 0.362 0.777 0.684 0.338`
It shows "Premature end of JPEG file" during validation. What leads to this?
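The warning usually means the file was truncated (for example, an interrupted download), so the JPEG is missing its end-of-image marker. A minimal stdlib check along those lines (a sketch, not YOLOv5's exact verification code):

```python
import os
import tempfile

def jpeg_is_truncated(path):
    """A complete JPEG ends with the EOI marker b'\\xff\\xd9'."""
    with open(path, "rb") as f:
        f.seek(-2, 2)                    # seek to the last two bytes
        return f.read() != b"\xff\xd9"

fd, good = tempfile.mkstemp(suffix=".jpg")
os.write(fd, b"\xff\xd8fakejpegdata\xff\xd9")  # SOI ... payload ... EOI
os.close(fd)
fd, bad = tempfile.mkstemp(suffix=".jpg")
os.write(fd, b"\xff\xd8fakejpegdata")          # truncated: no EOI
os.close(fd)

good_truncated = jpeg_is_truncated(good)
bad_truncated = jpeg_is_truncated(bad)
print(good_truncated, bad_truncated)  # → False True
os.unlink(good)
os.unlink(bad)
```

Running such a check over the dataset identifies which images to re-download or drop before training.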
## Additional context
| null | https://github.com/ultralytics/yolov5/pull/4548 | null | {'base_commit': '2da6444c9251f77cfd3e410369cd067245d961b5', 'files': [{'path': 'utils/datasets.py', 'status': 'modified', 'Loc': {"('LoadStreams', '__init__', 280)": {'mod': [317]}, "('LoadImagesAndLabels', '__getitem__', 529)": {'mod': [571]}, "(None, 'verify_image_label', 861)": {'mod': [864, 875, 878, 899]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"utils/datasets.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | 5afc9c25ef0874dff0c18267947ea4e8b03c90f4 | https://github.com/ultralytics/yolov5/issues/5040 | bug | Error caused by emoji in comments in yolov5/data/hyps/*.yaml file | Before submitting a bug report, please be aware that your issue **must be reproducible** with all of the following,
otherwise it is non-actionable, and we can not help you:
- **Current repo**: run `git fetch && git status -uno` to check and `git pull` to update repo
- **Common dataset**: coco.yaml or coco128.yaml
- **Common environment**: Colab, Google Cloud, or Docker image. See https://github.com/ultralytics/yolov5#environments
If this is a custom dataset/training question you **must include** your `train*.jpg`, `val*.jpg` and `results.png`
figures, or we can not help you. You can generate these with `utils.plot_results()`.
## 🐛 Bug
A decode error occurs when executing the suggested training command after `git clone` in a Windows environment.
## To Reproduce (REQUIRED)
Input:
```
(env38) PS C:\Users\Username\PycharmProjects\yolov5> python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt
```
Output:
```
Downloading https://ultralytics.com/assets/Arial.ttf to C:\Users\Username\AppData\Roaming\Ultralytics\Arial.ttf...
train: weights=yolov5s.pt, cfg=, data=coco128.yaml, hyp=data\hyps\hyp.scratch.yaml, epochs=3, batch_size=16, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=8, entity=None, project=runs\train, name=exp, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=-1, freeze=0, patience=100
github: up to date with https://github.com/ultralytics/yolov5
YOLOv5 v5.0-493-g1922dde torch 1.8.0+cu111 CUDA:0 (GeForce RTX 3090, 24576.0MB)
Traceback (most recent call last):
File "train.py", line 615, in <module>
main(opt)
File "train.py", line 512, in main
train(opt.hyp, opt, device, callbacks)
File "train.py", line 76, in train
hyp = yaml.safe_load(f) # load hyps dict
File "C:\Users\Username\miniconda3\envs\env38\lib\site-packages\yaml\__init__.py", line 162, in safe_load
return load(stream, SafeLoader)
File "C:\Users\Username\miniconda3\envs\env38\lib\site-packages\yaml\__init__.py", line 112, in load
loader = Loader(stream)
File "C:\Users\Username\miniconda3\envs\env38\lib\site-packages\yaml\loader.py", line 34, in __init__
Reader.__init__(self, stream)
File "C:\Users\Username\miniconda3\envs\env38\lib\site-packages\yaml\reader.py", line 85, in __init__
self.determine_encoding()
File "C:\Users\Username\miniconda3\envs\env38\lib\site-packages\yaml\reader.py", line 124, in determine_encoding
self.update_raw()
File "C:\Users\Username\miniconda3\envs\env38\lib\site-packages\yaml\reader.py", line 178, in update_raw
data = self.stream.read(size)
UnicodeDecodeError: 'cp949' codec can't decode byte 0xf0 in position 9: illegal multibyte sequence
```
## Expected behavior
A clear and concise description of what you expected to happen.
## Environment
If applicable, add screenshots to help explain your problem.
- OS: Windows 10
- GPU : RTX3090
## Additional context
This error occurs because of the rocket-shaped emoji (🚀) in the yolov5/data/hyps/*.yaml files. You can fix the error by editing the yaml files or by specifying the encoding explicitly (for example, `encoding='utf-8'`) when opening them.
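The failure is reproducible with plain `open()`: the hyperparameter files are UTF-8 (because of the emoji in the header comment), but Windows defaults to a locale codec such as cp949. A small stdlib sketch of the failure and the fix:

```python
import os
import tempfile

fd, path = tempfile.mkstemp(suffix=".yaml")
os.close(fd)
with open(path, "w", encoding="utf-8") as f:
    f.write("# Hyperparameters \U0001F680\nlr0: 0.01\n")

# A strict single-byte codec (stand-in for Windows' cp949 default) fails:
try:
    with open(path, encoding="ascii") as f:
        f.read()
    failed = False
except UnicodeDecodeError:
    failed = True
print(failed)  # → True

# Specifying the encoding explicitly works:
with open(path, encoding="utf-8") as f:
    text = f.read()
print("lr0: 0.01" in text)  # → True
os.unlink(path)
```

Passing `encoding='utf-8'` (or `errors='ignore'`) wherever the code opens these yaml files makes loading independent of the OS locale.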
| null | https://github.com/ultralytics/yolov5/pull/5060 | null | {'base_commit': '5afc9c25ef0874dff0c18267947ea4e8b03c90f4', 'files': [{'path': 'models/yolo.py', 'status': 'modified', 'Loc': {"('Model', '__init__', 83)": {'mod': [90]}}}, {'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 59)": {'mod': [75]}, "(None, 'main', 479)": {'mod': [491, 555]}}}, {'path': 'utils/aws/resume.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [24]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"train.py",
"utils/aws/resume.py",
"models/yolo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | 4e65052f28b1184b9d463c1e44b3a79b95113904 | https://github.com/ultralytics/yolov5/issues/4409 | bug | Training failed with 4 GPUs after first epoch |
## 🐛 Bug
I was able to train on OVH AI Cloud with 4 classes and 500 images in total three days ago with 4 GPUs but when I try again to train with my full dataset this time (around 9000 images for 4 classes), the training stops after the first epoch, when the validation step is about to finish.
I tried changing different things: removing the cache argument, switching to a smaller model (I was using 5MP6 at first), changing the batch size, and changing the number of GPUs; still the same.
## To Reproduce (REQUIRED)
First, here is my Dockerfile. It is based on the Official Yolov5 docker image with W&B integrated:
```dockerfile
FROM ultralytics/yolov5:latest
# unfortunately, wandb is commented out in the official image
RUN pip3 install wandb
# pass the wandb API key at build time
ARG wandb_key
ENV wandb_api_key=$wandb_key
# setup wandb account
RUN wandb login "$wandb_api_key"
WORKDIR /usr/src/app
RUN chown -R 42420:42420 /usr/src
# do stuff at start
COPY entrypoint.sh /usr/src/app
ENTRYPOINT ["/bin/bash", "-c", "./entrypoint.sh && bash"]
```
entrypoint.sh with:
* a call to the `autosplit()` function ;
* a call to train.py to start the training.
```sh
#!/bin/bash
# split datasets into training, validation & test
python3 -c "from utils.datasets import autosplit; autosplit('../logos/images', annotated_only=True);"
# start the training
python3 -m torch.distributed.launch \
--nproc_per_node 4 train.py \
--img-size 1280 \
--epochs 100 \
--data ../logos/logo.yaml \
--weights yolov5m.pt \
--batch-size 64 \
--device 0,1,2,3 \
--project results \
--name "$(date +'%Y-%m-%d')" \
--exist-ok \
--workers 0
```
Full output from the server:
```
0%| | 0/11481 [00:00<?, ?it/s]
100%|██████████| 11481/11481 [00:04<00:00, 2308.00it/s]
Autosplitting images from ../logos/images, using *.txt labeled images only
/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py:163: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
logger.warn(
The module torch.distributed.launch is deprecated and going to be removed in future.Migrate to torch.distributed.run
WARNING:torch.distributed.run:--use_env is deprecated and will be removed in future releases.
Please read local_rank from `os.environ('LOCAL_RANK')` instead.
INFO:torch.distributed.launcher.api:Starting elastic_operator with launch configs:
entrypoint : train.py
min_nodes : 1
max_nodes : 1
nproc_per_node : 4
run_id : none
rdzv_backend : static
rdzv_endpoint : 127.0.0.1:29500
rdzv_configs : {'rank': 0, 'timeout': 900}
max_restarts : 3
monitor_interval : 5
log_dir : None
metrics_cfg : {}
INFO:torch.distributed.elastic.agent.server.local_elastic_agent:log directory set to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1
INFO:torch.distributed.elastic.agent.server.api:[default] starting workers for entrypoint: python3
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:52: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=0
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1, 2, 3]
role_ranks=[0, 1, 2, 3]
global_ranks=[0, 1, 2, 3]
role_world_sizes=[4, 4, 4, 4]
global_world_sizes=[4, 4, 4, 4]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_0/3/error.json
train: weights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0
github: skipping check (Docker image), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)
Added key: store_based_barrier_key:1 to store for rank: 0
Rank 0: Completed store-based barrier for 4 nodes.
hyperparameters: lr0=0.01, lrf=0.2, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
TensorBoard: Start with 'tensorboard --logdir results', view at http://localhost:6006/
[W ProcessGroupNCCL.cpp:1569] Rank 3 using best-guess GPU 3 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 2 using best-guess GPU 2 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
[W ProcessGroupNCCL.cpp:1569] Rank 1 using best-guess GPU 1 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
wandb: Currently logged in as: hivacruz (use `wandb login --relogin` to force relogin)
CondaEnvException: Unable to determine environment
Please re-run this command with one of the following options:
* Provide an environment name via --name or -n
* Re-run this command inside an activated conda environment.
wandb: Tracking run with wandb version 0.12.0
wandb: Syncing run 2021-08-13
wandb: View project at https://wandb.ai/hivacruz/results
wandb: View run at https://wandb.ai/hivacruz/results/runs/b7blzdq6
wandb: Run data is saved locally in /usr/src/app/wandb/run-20210813_152353-b7blzdq6
wandb: Run `wandb offline` to turn off syncing.
[W ProcessGroupNCCL.cpp:1569] Rank 0 using best-guess GPU 0 to perform barrier as devices used by this process are currently unknown. This can potentially cause a hang if this rank to GPU mapping is incorrect.Specify device_ids in barrier() to force use of a particular device.
Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5m.pt to yolov5m.pt...
0% 0.00/41.1M [00:00<?, ?B/s]
100% 41.1M/41.1M [00:00<00:00, 56.7MB/s]
Overriding model.yaml nc=80 with nc=4
from n params module arguments
0 -1 1 5280 models.common.Focus [3, 48, 3]
1 -1 1 41664 models.common.Conv [48, 96, 3, 2]
2 -1 2 65280 models.common.C3 [96, 96, 2]
3 -1 1 166272 models.common.Conv [96, 192, 3, 2]
4 -1 6 629760 models.common.C3 [192, 192, 6]
5 -1 1 664320 models.common.Conv [192, 384, 3, 2]
6 -1 6 2512896 models.common.C3 [384, 384, 6]
7 -1 1 2655744 models.common.Conv [384, 768, 3, 2]
8 -1 1 1476864 models.common.SPP [768, 768, [5, 9, 13]]
9 -1 2 4134912 models.common.C3 [768, 768, 2, False]
10 -1 1 295680 models.common.Conv [768, 384, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 2 1182720 models.common.C3 [768, 384, 2, False]
14 -1 1 74112 models.common.Conv [384, 192, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 2 296448 models.common.C3 [384, 192, 2, False]
18 -1 1 332160 models.common.Conv [192, 192, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 2 1035264 models.common.C3 [384, 384, 2, False]
21 -1 1 1327872 models.common.Conv [384, 384, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 2 4134912 models.common.C3 [768, 768, 2, False]
24 [17, 20, 23] 1 36369 models.yolo.Detect [4, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [192, 384, 768]]
Model Summary: 391 layers, 21068529 parameters, 21068529 gradients, 50.4 GFLOPs
Transferred 500/506 items from yolov5m.pt
Scaled weight_decay = 0.0005
optimizer: SGD with parameter groups 83 weight, 86 weight (no decay), 86 bias
train: WARNING: Ignoring corrupted image and/or label /usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png: cannot identify image file '/usr/src/logos/images/xxx/e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855.png'
train: Scanning '/usr/src/logos/autosplit_train.cache' images and labels... 9461 found, 0 missing, 2701 empty, 4 corrupted: 100% 9465/9465 [00:00<?, ?it/s]
val: Scanning '/usr/src/logos/autosplit_val.cache' images and labels... 996 found, 0 missing, 286 empty, 0 corrupted: 100% 996/996 [00:00<?, ?it/s]
Plotting labels...
autoanchor: Analyzing anchors... anchors/target = 5.73, Best Possible Recall (BPR) = 1.0000
Image sizes 1280 train, 1280 val
Using 0 dataloader workers
Logging results to results/2021-08-13
Starting training for 100 epochs...
Epoch gpu_mem box obj cls labels img_size
0% 0/148 [00:00<?, ?it/s]
0/99 19.8G 0.1254 0.08545 0.04525 30 1280: 0% 0/148 [00:11<?, ?it/s]
0/99 19.8G 0.1254 0.08545 0.04525 30 1280: 1% 1/148 [00:15<37:35, 15.34s/it]Reducer buckets have been rebuilt in this iteration.
0/99 21.5G 0.123 0.08545 0.04553 25 1280: 1% 1/148 [00:20<37:35, 15.34s/it]
0/99 21.5G 0.123 0.08545 0.04553 25 1280: 1% 2/148 [00:20<29:54, 12.29s/it]
0/99 21.5G 0.0694 0.02504 0.02339 31 1280: 93% 138/148 [12:06<00:51, 5.17s/it]
0/99 21.5G 0.0693 0.02499 0.02329 23 1280: 93% 138/148 [12:12<00:51, 5.17s/it]
0/99 21.5G 0.0693 0.02499 0.02329 23 1280: 94% 139/148 [12:12<00:46, 5.19s/it]
0/99 21.5G 0.06923 0.02492 0.02318 22 1280: 94% 139/148 [12:17<00:46, 5.19s/it]
0/99 21.5G 0.06923 0.02492 0.02318 22 1280: 95% 140/148 [12:17<00:41, 5.18s/it]
0/99 21.5G 0.06913 0.02487 0.02307 25 1280: 95% 140/148 [12:22<00:41, 5.18s/it]
0/99 21.5G 0.06913 0.02487 0.02307 25 1280: 95% 141/148 [12:22<00:36, 5.18s/it]
0/99 21.5G 0.06908 0.02482 0.02299 22 1280: 95% 141/148 [12:27<00:36, 5.18s/it]
0/99 21.5G 0.06908 0.02482 0.02299 22 1280: 96% 142/148 [12:27<00:30, 5.15s/it]
0/99 21.5G 0.06899 0.02473 0.02288 18 1280: 96% 142/148 [12:32<00:30, 5.15s/it]
0/99 21.5G 0.06899 0.02473 0.02288 18 1280: 97% 143/148 [12:32<00:25, 5.16s/it]
0/99 21.5G 0.06892 0.0247 0.0228 39 1280: 97% 143/148 [12:37<00:25, 5.16s/it]
0/99 21.5G 0.06892 0.0247 0.0228 39 1280: 97% 144/148 [12:37<00:20, 5.17s/it]
0/99 21.5G 0.06881 0.02464 0.0227 24 1280: 97% 144/148 [12:43<00:20, 5.17s/it]
0/99 21.5G 0.06881 0.02464 0.0227 24 1280: 98% 145/148 [12:43<00:15, 5.20s/it]
0/99 21.5G 0.06878 0.02462 0.02261 38 1280: 98% 145/148 [12:48<00:15, 5.20s/it]
0/99 21.5G 0.06878 0.02462 0.02261 38 1280: 99% 146/148 [12:48<00:10, 5.22s/it]
0/99 21.5G 0.06871 0.02453 0.0225 14 1280: 99% 146/148 [12:53<00:10, 5.22s/it]
0/99 21.5G 0.06871 0.02453 0.0225 14 1280: 99% 147/148 [12:53<00:05, 5.20s/it]
0/99 17.4G 0.06859 0.02448 0.0224 24 1280: 99% 147/148 [13:06<00:05, 5.20s/it]
0/99 17.4G 0.06859 0.02448 0.0224 24 1280: 100% 148/148 [13:06<00:00, 7.56s/it]
0/99 17.4G 0.06859 0.02448 0.0224 24 1280: 100% 148/148 [13:06<00:00, 5.32s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 0% 0/32 [00:00<?, ?it/s]
Class Images Labels P R mAP@.5 mAP@.5:.95: 3% 1/32 [00:05<02:42, 5.23s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 6% 2/32 [00:08<02:16, 4.57s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 9% 3/32 [00:11<01:56, 4.03s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 12% 4/32 [00:13<01:42, 3.65s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 16% 5/32 [00:16<01:31, 3.40s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 19% 6/32 [00:19<01:23, 3.22s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 22% 7/32 [00:22<01:17, 3.11s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 25% 8/32 [00:25<01:12, 3.02s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 28% 9/32 [00:27<01:07, 2.93s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 31% 10/32 [00:30<01:03, 2.89s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 34% 11/32 [00:33<01:00, 2.88s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 38% 12/32 [00:36<00:56, 2.85s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 41% 13/32 [00:39<00:53, 2.84s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 44% 14/32 [00:41<00:50, 2.82s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 47% 15/32 [00:44<00:47, 2.80s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 50% 16/32 [00:47<00:44, 2.80s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 53% 17/32 [00:50<00:42, 2.85s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 56% 18/32 [00:53<00:39, 2.81s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 59% 19/32 [00:55<00:36, 2.81s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 62% 20/32 [00:58<00:33, 2.80s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 66% 21/32 [01:01<00:30, 2.79s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 69% 22/32 [01:04<00:27, 2.80s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 72% 23/32 [01:07<00:25, 2.80s/it]
Class Images Labels P R mAP@.5 mAP@.5:.95: 75% 24/32 [01:09<00:22, 2.83s/it][E ProcessGroupNCCL.cpp:566] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66853 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:566] [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66854 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:566] [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66664 milliseconds before timing out.
Class Images Labels P R mAP@.5 mAP@.5:.95: 78% 25/32 [01:12<00:20, 2.86s/it][E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 3] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66664 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 2] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66854 milliseconds before timing out.
[E ProcessGroupNCCL.cpp:325] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=BROADCAST, Timeout(ms)=60000) ran for 66853 milliseconds before timing out.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 1 (pid: 174) of binary: /opt/conda/bin/python3
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 3/3 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=1
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1, 2, 3]
role_ranks=[0, 1, 2, 3]
global_ranks=[0, 1, 2, 3]
role_world_sizes=[4, 4, 4, 4]
global_world_sizes=[4, 4, 4, 4]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_1/3/error.json
[34m[1mtrain: [0mweights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0
[34m[1mgithub: [0mskipping check (Docker image), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)
Added key: store_based_barrier_key:1 to store for rank: 0
/opt/conda/lib/python3.8/multiprocessing/resource_tracker.py:216: UserWarning: resource_tracker: There appear to be 6 leaked semaphore objects to clean up at shutdown
warnings.warn('resource_tracker: There appear to be %d '
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 2, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 3, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
Traceback (most recent call last):
Traceback (most recent call last):
File "train.py", line 600, in <module>
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
    raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=4, worker_count=8, timeout=0:01:00)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 298) of binary: /opt/conda/bin/python3
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 2/3 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=2
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1, 2, 3]
role_ranks=[0, 1, 2, 3]
global_ranks=[0, 1, 2, 3]
role_world_sizes=[4, 4, 4, 4]
global_world_sizes=[4, 4, 4, 4]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_2/3/error.json
[34m[1mtrain: [0mweights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0
[34m[1mgithub: [0mskipping check (Docker image), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)
Added key: store_based_barrier_key:1 to store for rank: 0
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 3, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
    _store_based_barrier(rank, store, timeout)
  File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
    main(opt)
  File "train.py", line 494, in main
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 2, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=12, timeout=0:01:00)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 346) of binary: /opt/conda/bin/python3
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:[default] Worker group FAILED. 1/3 attempts left; will restart worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Stopping worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous'ing worker group
INFO:torch.distributed.elastic.agent.server.api:[default] Rendezvous complete for workers. Result:
restart_count=3
master_addr=127.0.0.1
master_port=29500
group_rank=0
group_world_size=1
local_ranks=[0, 1, 2, 3]
role_ranks=[0, 1, 2, 3]
global_ranks=[0, 1, 2, 3]
role_world_sizes=[4, 4, 4, 4]
global_world_sizes=[4, 4, 4, 4]
INFO:torch.distributed.elastic.agent.server.api:[default] Starting worker group
INFO:torch.distributed.elastic.multiprocessing:Setting worker0 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/0/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker1 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/1/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker2 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/2/error.json
INFO:torch.distributed.elastic.multiprocessing:Setting worker3 reply file to: /tmp/torchelastic_wqpgf9x4/none_u1agcdp1/attempt_3/3/error.json
[34m[1mtrain: [0mweights=yolov5m.pt, cfg=, data=../logos/logo.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=100, batch_size=64, imgsz=1280, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=None, image_weights=False, device=0,1,2,3, multi_scale=False, single_cls=False, adam=False, sync_bn=False, workers=0, project=results, entity=None, name=2021-08-13, exist_ok=True, quad=False, linear_lr=False, label_smoothing=0.0, upload_dataset=False, bbox_interval=-1, save_period=-1, artifact_alias=latest, local_rank=0, freeze=0
[34m[1mgithub: [0mskipping check (Docker image), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 v5.0-360-gd9f23ed torch 1.9.0+cu102 CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)
CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)
Added key: store_based_barrier_key:1 to store for rank: 0
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Waiting in store based barrier to initialize process group for rank: 0, key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 2, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 1, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 3, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
Traceback (most recent call last):
File "train.py", line 600, in <module>
main(opt)
File "train.py", line 494, in main
dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo", timeout=timedelta(seconds=60))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 547, in init_process_group
_store_based_barrier(rank, store, timeout)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/distributed_c10d.py", line 219, in _store_based_barrier
raise RuntimeError(
RuntimeError: Timed out initializing process group in store based barrier on rank: 0, for key: store_based_barrier_key:1 (world_size=4, worker_count=16, timeout=0:01:00)
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 0 (pid: 390) of binary: /opt/conda/bin/python3
ERROR:torch.distributed.elastic.agent.server.local_elastic_agent:[default] Worker group failed
INFO:torch.distributed.elastic.agent.server.api:Local worker group finished (FAILED). Waiting 300 seconds for other agents to finish
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/utils/store.py:70: FutureWarning: This is an experimental API and will be changed in future.
warnings.warn(
INFO:torch.distributed.elastic.agent.server.api:Done waiting for other agents. Elapsed: 0.0009250640869140625 seconds
{"name": "torchelastic.worker.status.FAILED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 0, "group_rank": 0, "worker_id": "390", "role": "default", "hostname": "job-155b782d-12d0-457b-ada3-ee678ed0e091", "state": "FAILED", "total_run_time": 1071, "rdzv_backend": "static", "raw_error": "{\"message\": \"<NONE>\"}", "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python3\", \"local_rank\": [0], \"role_rank\": [0], \"role_world_size\": [4]}", "agent_restarts": 3}}
{"name": "torchelastic.worker.status.FAILED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 1, "group_rank": 0, "worker_id": "391", "role": "default", "hostname": "job-155b782d-12d0-457b-ada3-ee678ed0e091", "state": "FAILED", "total_run_time": 1071, "rdzv_backend": "static", "raw_error": "{\"message\": \"<NONE>\"}", "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python3\", \"local_rank\": [1], \"role_rank\": [1], \"role_world_size\": [4]}", "agent_restarts": 3}}
{"name": "torchelastic.worker.status.FAILED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 2, "group_rank": 0, "worker_id": "392", "role": "default", "hostname": "job-155b782d-12d0-457b-ada3-ee678ed0e091", "state": "FAILED", "total_run_time": 1071, "rdzv_backend": "static", "raw_error": "{\"message\": \"<NONE>\"}", "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python3\", \"local_rank\": [2], \"role_rank\": [2], \"role_world_size\": [4]}", "agent_restarts": 3}}
{"name": "torchelastic.worker.status.FAILED", "source": "WORKER", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": 3, "group_rank": 0, "worker_id": "393", "role": "default", "hostname": "job-155b782d-12d0-457b-ada3-ee678ed0e091", "state": "FAILED", "total_run_time": 1071, "rdzv_backend": "static", "raw_error": "{\"message\": \"<NONE>\"}", "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python3\", \"local_rank\": [3], \"role_rank\": [3], \"role_world_size\": [4]}", "agent_restarts": 3}}
{"name": "torchelastic.worker.status.SUCCEEDED", "source": "AGENT", "timestamp": 0, "metadata": {"run_id": "none", "global_rank": null, "group_rank": 0, "worker_id": null, "role": "default", "hostname": "job-155b782d-12d0-457b-ada3-ee678ed0e091", "state": "SUCCEEDED", "total_run_time": 1071, "rdzv_backend": "static", "raw_error": null, "metadata": "{\"group_world_size\": 1, \"entry_point\": \"python3\"}", "agent_restarts": 3}}
/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py:354: UserWarning:
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
**********************************************************************
CHILD PROCESS FAILED WITH NO ERROR_FILE
Child process 390 (local_rank 0) FAILED (exitcode 1)
Error msg: Process failed with exitcode 1
Without writing an error file to <N/A>.
While this DOES NOT affect the correctness of your application,
no trace information about the error will be available for inspection.
Consider decorating your top level entrypoint function with
torch.distributed.elastic.multiprocessing.errors.record. Example:
from torch.distributed.elastic.multiprocessing.errors import record
@record
def trainer_main(args):
# do train
**********************************************************************
warnings.warn(_no_error_file_warning_msg(rank, failure))
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 173, in <module>
main()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 169, in main
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 621, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 116, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 348, in wrapper
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
return f(*args, **kwargs)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 245, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
***************************************
train.py FAILED
=======================================
Root Cause:
[0]:
time: 2021-08-13_15:41:42
rank: 0 (local_rank: 0)
exitcode: 1 (pid: 390)
error_file: <N/A>
msg: "Process failed with exitcode 1"
=======================================
Other Failures:
[1]:
time: 2021-08-13_15:41:42
rank: 1 (local_rank: 1)
exitcode: 1 (pid: 391)
error_file: <N/A>
msg: "Process failed with exitcode 1"
[2]:
time: 2021-08-13_15:41:42
rank: 2 (local_rank: 2)
exitcode: 1 (pid: 392)
error_file: <N/A>
msg: "Process failed with exitcode 1"
[3]:
time: 2021-08-13_15:41:42
rank: 3 (local_rank: 3)
exitcode: 1 (pid: 393)
error_file: <N/A>
msg: "Process failed with exitcode 1"
***************************************
```
## Expected behavior
The training should keep going after the first epoch is over and the first validation step is over.
## Environment
I'm using OVH AI Cloud to train. Using the Docker image described above (basically the official one).
- Yolo v5.0-360-gd9f23ed torch 1.9.0+cu102
- CUDA:0 (Tesla V100S-PCIE-32GB, 32510.5MB)
- CUDA:1 (Tesla V100S-PCIE-32GB, 32510.5MB)
- CUDA:2 (Tesla V100S-PCIE-32GB, 32510.5MB)
- CUDA:3 (Tesla V100S-PCIE-32GB, 32510.5MB)
Resources for the job:
Cpu: 13
Memory: 40.0 GiB
Public Network: 1.5 Gbps
Private Network: 0 bps
Ephemeral Storage: 650.0 GiB
Gpu Model: Tesla-V100S
Gpu Brand: NVIDIA
Gpu Memory: 32.0 GiB
Flavor: ai1-1-gpu
## Additional context
I didn't encounter this problem with only 500 images a few days ago, with 4 GPUs. I encountered multiple problems today due to the `cache` argument being used, but now that it is gone, I can't find the reason why it's failing at the end of the first validation step (around 900 images). | null | https://github.com/ultralytics/yolov5/pull/4422 | null | {'base_commit': '4e65052f28b1184b9d463c1e44b3a79b95113904', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'main', 461)": {'mod': [496]}}}, {'path': 'utils/torch_utils.py', 'status': 'modified', 'Loc': {"(None, 'torch_distributed_zero_first', 33)": {'mod': [38, 41]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"train.py",
"utils/torch_utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
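The linked fix for the hang above touched `torch_distributed_zero_first` in `utils/torch_utils.py`. As a rough, hypothetical sketch of what such a rank-0-first guard looks like (the `barrier` callable stands in for `torch.distributed.barrier`, which is deliberately not imported here so the sketch stays torch-free):

```python
from contextlib import contextmanager

@contextmanager
def torch_distributed_zero_first(local_rank: int, barrier=lambda: None):
    # All ranks except 0 (and -1, i.e. non-distributed runs) wait at the
    # barrier until rank 0 has run the guarded block; rank 0 then waits
    # for the other ranks to catch up before continuing.
    if local_rank not in (-1, 0):
        barrier()
    yield
    if local_rank == 0:
        barrier()
```

In non-distributed runs (`local_rank == -1`) the barrier is never hit, so single-GPU training is unaffected.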
ultralytics | yolov5 | 554f782537b9af336c02c013468b78fe16ce092d | https://github.com/ultralytics/yolov5/issues/5916 | enhancement | onnxruntime-gpu 1.10 | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar feature requests.
### Description
Using onnxruntime-gpu 1.10, the following error will occur.
```
raise ValueError("This ORT build has {} enabled. ".format(available_providers) +
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)
```
### Use case
onnxruntime-gpu 1.10 requires providers
```
elif onnx: # ONNX Runtime
LOGGER.info(f'Loading {w} for ONNX Runtime inference...')
check_requirements(('onnx', 'onnxruntime-gpu' if torch.cuda.is_available() else 'onnxruntime'))
import onnxruntime
if torch.cuda.is_available():
session = onnxruntime.InferenceSession(w, None, providers=["CUDAExecutionProvider"])
else:
session = onnxruntime.InferenceSession(w, None)
```
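The snippet above can be reduced to a small helper. This is only an illustrative sketch (the `select_providers` name is mine, not part of ONNX Runtime or YOLOv5), showing the provider list that `onnxruntime.InferenceSession(..., providers=...)` requires since ORT 1.9:

```python
def select_providers(cuda_available: bool) -> list:
    # Since ONNX Runtime 1.9 the providers argument is mandatory; keep the
    # CPU provider as a fallback even when CUDA is present.
    if cuda_available:
        return ["CUDAExecutionProvider", "CPUExecutionProvider"]
    return ["CPUExecutionProvider"]
```

The returned list would then be passed as `onnxruntime.InferenceSession(w, providers=select_providers(torch.cuda.is_available()))`.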
### Additional
_No response_
### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | null | https://github.com/ultralytics/yolov5/pull/5918 | null | {'base_commit': '554f782537b9af336c02c013468b78fe16ce092d', 'files': [{'path': 'models/common.py', 'status': 'modified', 'Loc': {"('DetectMultiBackend', '__init__', 279)": {'mod': [323, 325]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"models/common.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | b510957650c890dee876146c43dcda1fdfc279d6 | https://github.com/ultralytics/yolov5/issues/8641 | bug
TODO | Albumentations-Pipeline is applied to BGR not to RGB | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
As written [here ](https://albumentations.ai/docs/getting_started/image_augmentation/) in step 3, Albumentations internally uses the RGB format and not the BGR format of opencv. However, the data is currently passed internally as BGR:
https://github.com/ultralytics/yolov5/blob/92e47b85d952274480c8c5efa5900e686241a96b/utils/dataloaders.py#L626-L628
https://github.com/ultralytics/yolov5/blob/92e47b85d952274480c8c5efa5900e686241a96b/utils/dataloaders.py#L654
Or am I missing something?
### Environment
YOLOv5 torch 1.11 (cuda 11.3) and 1.12 (cuda 11.6)
### Minimal Reproducible Example
_No response_
### Additional
_No response_
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | null | https://github.com/ultralytics/yolov5/pull/8695 | null | {'base_commit': 'b367860196a2590a5f44c9b18401dedfc0543077', 'files': [{'path': 'utils/augmentations.py', 'status': 'modified', 'Loc': {"('Albumentations', '__call__', 40)": {'mod': [42, 43]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"utils/augmentations.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
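The fix for the Albumentations issue above amounts to converting BGR to RGB around the augmentation call. A minimal illustrative helper (not the repository's actual code):

```python
import numpy as np

def bgr_to_rgb(im: np.ndarray) -> np.ndarray:
    # OpenCV loads images as BGR while Albumentations assumes RGB;
    # reversing the last (channel) axis converts between the two.
    # The same function converts back, since the operation is its own inverse.
    return np.ascontiguousarray(im[..., ::-1])
```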
ultralytics | yolov5 | 0b6266f5e0eab11218871d5560bf9b93f7547aac | https://github.com/ultralytics/yolov5/issues/1816 | question | time_synchronized() when using CPU for inference on a GPU enabled workstation? | I'm trying to measure the time taken for inference using a CPU vs a GPU.
I set --device to cpu when I run detect.py, but the method time_synchronized() checks torch.cuda.is_available(), which is obviously True since the GPU is available but not used.
I've also noticed that when I comment out `torch.cuda.synchronize() if torch.cuda.is_available() else None` (in the time_synchronized() method) while using --device as cpu, the inference speeds up.
Shouldn't time_synchronized() be connected to the --device parameter?
| null | https://github.com/ultralytics/yolov5/pull/1826 | null | {'base_commit': '0b6266f5e0eab11218871d5560bf9b93f7547aac', 'files': [{'path': 'utils/torch_utils.py', 'status': 'modified', 'Loc': {"(None, 'init_torch_seeds', 35)": {'mod': [39, 40, 42, 43]}, "(None, 'select_device', 46)": {'mod': [48, 49, 51, 53, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 66, 68]}, "(None, 'time_synchronized', 72)": {'mod': [74]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"utils/torch_utils.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
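One way to tie the timing helper to the selected device, as the issue above suggests, is sketched below; `cuda_sync` is a stand-in for `torch.cuda.synchronize` so the sketch stays torch-free, and the function name is illustrative rather than the repository's exact API:

```python
import time

def time_sync(device: str = "cpu", cuda_sync=None) -> float:
    # Only synchronize the GPU when a CUDA device was actually selected;
    # with --device cpu the (slow) synchronize call is skipped entirely.
    if device != "cpu" and cuda_sync is not None:
        cuda_sync()
    return time.time()
```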
ultralytics | yolov5 | e2b7bc0b32ecf306fc179bb87bad82216a470b37 | https://github.com/ultralytics/yolov5/issues/1945 | bug
Stale | CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13 |
## Additional context
The issue occurs here:
Converting Frontend ==> MIL Ops: 0%| | 0/970 [00:00<?, ? ops/s]Converting op 221 : constant
Adding op '221' of type const
Converting op 222 : constant
Adding op '222' of type const
Converting op 223 : constant
Adding op '223' of type const
Converting op 224 : constant
Adding op '224' of type const
Converting op 225 : constant
Converting op 226 : constant
Adding op '226' of type const
Converting op 227 : constant
Adding op '227' of type const
Converting op 228 : slice
Adding op '228' of type slice_by_index
Adding op '228_begin_0' of type const
Adding op '228_end_0' of type const
Adding op '228_stride_0' of type const
Adding op '228_end_mask_0' of type const
Converting op 229 : slice
Adding op '229' of type slice_by_index
Adding op '229_begin_0' of type const
Adding op '229_end_0' of type const
Adding op '229_stride_0' of type const
Adding op '229_end_mask_0' of type const
Converting op 230 : slice
Adding op '230' of type slice_by_index
Adding op '230_begin_0' of type const
Adding op '230_end_0' of type const
Adding op '230_stride_0' of type const
Adding op '230_end_mask_0' of type const
Converting op 231 : slice
Adding op '231' of type slice_by_index
Adding op '231_begin_0' of type const
Adding op '231_end_0' of type const
Adding op '231_stride_0' of type const
Adding op '231_end_mask_0' of type const
Converting op 232 : slice
Adding op '232' of type slice_by_index
Adding op '232_begin_0' of type const
Adding op '232_end_0' of type const
Adding op '232_stride_0' of type const
Adding op '232_end_mask_0' of type const
Converting op 233 : slice
Adding op '233' of type slice_by_index
Adding op '233_begin_0' of type const
Adding op '233_end_0' of type const
Adding op '233_stride_0' of type const
Adding op '233_end_mask_0' of type const
Converting op 234 : slice
Adding op '234' of type slice_by_index
Adding op '234_begin_0' of type const
Adding op '234_end_0' of type const
Adding op '234_stride_0' of type const
Adding op '234_end_mask_0' of type const
Converting op 235 : slice
Adding op '235' of type slice_by_index
Adding op '235_begin_0' of type const
Adding op '235_end_0' of type const
Adding op '235_stride_0' of type const
Adding op '235_end_mask_0' of type const
Converting op 236 : listconstruct
Converting op input.1 : cat
Adding op 'input.1' of type concat
Adding op 'input.1_interleave_0' of type const
Converting op 238 : listconstruct
Adding op '238' of type const
Converting op 239 : listconstruct
Adding op '239' of type const
Converting op 240 : listconstruct
Adding op '240' of type const
Converting op 241 : listconstruct
Adding op '241' of type const
Converting op x.2 : _convolution
Converting Frontend ==> MIL Ops: 2%|█ | 21/970 [00:00<00:00, 1017.80 ops/s]
CoreML export failure: unexpected number of inputs for node x.2 (_convolution): 13
Export complete (11.10s). Visualize with https://github.com/lutzroeder/netron
when I use command: python models/export.py --weights "yolov5l.pt" --img 640 --batch 1
I see #1667
my torch = 1.7.1 and torchvision= 0.8.2 torchaudio=0.7.2 coremltools=4.0
what wrong happen? | null | https://github.com/ultralytics/yolov5/pull/2762 | null | {'base_commit': 'e2b7bc0b32ecf306fc179bb87bad82216a470b37', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 16], 'mod': [9, 20, 21, 26, 27, 28, 29, 30, 31, 32, 33, 35, 36, 37, 38, 47, 88]}}}, {'path': 'hubconf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [70]}, "(None, 'yolov5s', 58)": {'mod': [58, 59, 61, 62, 63, 64, 69]}, "(None, 'yolov5m', 72)": {'mod': [72, 73, 75, 76, 77, 78, 80, 81, 82]}, "(None, 'yolov5l', 86)": {'mod': [87, 89, 90, 91, 92, 94, 95, 96]}, "(None, 'yolov5x', 100)": {'mod': [101, 103, 104, 105, 106, 108, 109, 110, 111]}, "(None, 'custom', 114)": {'mod': [114, 115, 117, 118, 119, 120, 122, 123, 124, 125, 126, 127, 129, 130, 131, 132, 133, 134, 135]}}}, {'path': 'utils/plots.py', 'status': 'modified', 'Loc': {"(None, 'plot_study_txt', 240)": {'mod': [246, 256, 264]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"hubconf.py",
"utils/plots.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
ultralytics | yolov5 | fd1679975bf55325f606631b28d5d3feb47fbda5 | https://github.com/ultralytics/yolov5/issues/2332 | question | Label smoothing in training option | Hi, I could not find any questions about label smoothing, so I wonder: is there a `label smoothing` option in the training script?
I think it would be useful, as the authors (from [this](https://arxiv.org/pdf/1902.04103.pdf) paper) demonstrated the performance boost.

| null | https://github.com/ultralytics/yolov5/pull/2344 | null | {'base_commit': 'fd1679975bf55325f606631b28d5d3feb47fbda5', 'files': [{'path': 'train.py', 'status': 'modified', 'Loc': {"(None, 'train', 41)": {'add': [226]}, '(None, None, None)': {'add': [483]}}}, {'path': 'utils/loss.py', 'status': 'modified', 'Loc': {"('ComputeLoss', '__init__', 90)": {'mod': [100]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"train.py",
"utils/loss.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
CorentinJ | Real-Time-Voice-Cloning | 1e1687743a0c2b1f8027076ffc3651a61bbc8b66 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/94 | Toolbox & sounddevice: Invalid sample rate on playback, microphone not recognized | I've tried using the toolbox and using the play button throws the following exception on Arch Linux with PulseAudio
```
sounddevice.PortAudioError: Error opening OutputStream: Invalid sample rate [PaErrorCode -9997]
Traceback (most recent call last):
File "/home/dash/programs/Real-Time-Voice-Cloning/toolbox/__init__.py", line 81, in <lambda>
func = lambda: self.ui.play(self.ui.selected_utterance.wav, Synthesizer.sample_rate)
File "/home/dash/programs/Real-Time-Voice-Cloning/toolbox/ui.py", line 142, in play
sd.play(wav, sample_rate)
File "/usr/lib/python3.7/site-packages/sounddevice.py", line 154, in play
**kwargs)
File "/usr/lib/python3.7/site-packages/sounddevice.py", line 2417, in start_stream
**kwargs)
File "/usr/lib/python3.7/site-packages/sounddevice.py", line 1374, in __init__
**_remove_self(locals()))
File "/usr/lib/python3.7/site-packages/sounddevice.py", line 780, in __init__
'Error opening {0}'.format(self.__class__.__name__))
File "/usr/lib/python3.7/site-packages/sounddevice.py", line 2572, in _check
raise PortAudioError(errormsg, err)
sounddevice.PortAudioError: Error opening OutputStream: Invalid sample rate [PaErrorCode -9997]
```
Using the recording function throws a similar exception:
`Error opening InputStream: Invalid sample rate [PaErrorCode -9997]` | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/390 | null | {'base_commit': '1e1687743a0c2b1f8027076ffc3651a61bbc8b66', 'files': [{'path': 'toolbox/__init__.py', 'status': 'modified', 'Loc': {"('Toolbox', 'setup_events', 57)": {'add': [85]}}}, {'path': 'toolbox/ui.py', 'status': 'modified', 'Loc': {"('UI', 'draw_umap_projections', 98)": {'add': [138]}, "('UI', None, 52)": {'add': [139]}, '(None, None, None)': {'mod': [16]}, "('UI', 'record_one', 147)": {'mod': [168]}, "('UI', '__init__', 342)": {'mod': [429]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"toolbox/__init__.py",
"toolbox/ui.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
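When a PortAudio device rejects the synthesizer's native rate, as in the issue above, one workaround is to pick the closest rate the device does accept and resample the audio before playback. A hypothetical helper (the name and candidate-rate list are assumptions, not the sounddevice API):

```python
def playback_sample_rate(requested: int,
                         supported=(8000, 16000, 22050, 32000, 44100, 48000)) -> int:
    # Return the requested rate if the device supports it, otherwise the
    # nearest supported rate (the audio must then be resampled to match).
    if requested in supported:
        return requested
    return min(supported, key=lambda r: abs(r - requested))
```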
CorentinJ | Real-Time-Voice-Cloning | 5d6d9ff499912c32a331f3bb5ed9e1b77db4c7e6 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/303 | Tensorflow import error in the Google Colab notebook | When installing the requirements with pip, I get the following errors which causes tensorflow to not be installed.
`ERROR: tensorflow 2.2.0rc1 has requirement tensorboard<2.2.0,>=2.1.0, but you'll have tensorboard 1.14.0 which is incompatible.`
`ERROR: tensorflow 2.2.0rc1 has requirement tensorflow-estimator<2.3.0,>=2.2.0rc0, but you'll have tensorflow-estimator 1.14.0 which is incompatible.` | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/366 | null | {'base_commit': '5d6d9ff499912c32a331f3bb5ed9e1b77db4c7e6', 'files': [{'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 32], 'mod': [41, 42, 44, 45, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 175, 177, 178, 180]}}}, {'path': 'encoder/inference.py', 'status': 'modified', 'Loc': {"(None, 'load_model', 15)": {'mod': [33]}}}, {'path': 'encoder/train.py', 'status': 'modified', 'Loc': {"(None, 'sync', 9)": {'add': [14], 'mod': [10, 11]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'synthesizer/feeder.py', 'status': 'modified', 'Loc': {"('Feeder', '__init__', 17)": {'mod': [73, 74, 75, 77, 78, 79, 83, 88, 103]}}}, {'path': 'synthesizer/inference.py', 'status': 'modified', 'Loc': {"('Synthesizer', 'load', 50)": {'mod': [57]}, "('Synthesizer', '_one_shot_synthesize_spectrograms', 89)": {'mod': [91]}}}, {'path': 'synthesizer/models/attention.py', 'status': 'modified', 'Loc': {"(None, '_location_sensitive_score', 38)": {'mod': [63, 66]}, "('LocationSensitiveAttention', '__init__', 111)": {'mod': [158, 161]}}}, {'path': 'synthesizer/models/helpers.py', 'status': 'modified', 'Loc': {"('TacoTrainingHelper', 'next_inputs', 115)": {'mod': [122]}}}, {'path': 'synthesizer/models/modules.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}, "('HighwayNet', '__init__', 5)": {'mod': [9, 10]}, "('HighwayNet', '__call__', 13)": {'mod': [14]}, "('CBHG', '__call__', 40)": {'mod': [41, 42, 74]}, "('ZoneoutLSTMCell', None, 91)": {'mod': [91]}, "('ZoneoutLSTMCell', '__init__', 102)": {'mod': [112]}, "('ZoneoutLSTMCell', '__call__', 126)": {'mod': [147, 148, 149, 150, 156]}, "('EncoderConvolutions', '__call__', 186)": {'mod': [187]}, "('EncoderRNN', 
'__call__', 228)": {'mod': [229, 230]}, "('Prenet', '__call__', 263)": {'mod': [266, 268, 272]}, "('DecoderRNN', '__init__', 281)": {'mod': [305]}, "('DecoderRNN', '__call__', 307)": {'mod': [308]}, "('FrameProjection', '__init__', 316)": {'mod': [330]}, "('FrameProjection', '__call__', 333)": {'mod': [334, 337]}, "('StopProjection', '__call__', 364)": {'mod': [365]}, "('Postnet', '__call__', 401)": {'mod': [402]}, "(None, 'conv1d', 414)": {'mod': [415, 416, 422, 424]}}}, {'path': 'synthesizer/models/tacotron.py', 'status': 'modified', 'Loc': {"('Tacotron', 'initialize', 31)": {'mod': [86, 87, 88, 89, 90, 123, 124, 125, 135, 286]}, "('Tacotron', 'add_loss', 312)": {'mod': [334, 335, 336, 359, 360, 362, 363]}, "('Tacotron', 'add_optimizer', 427)": {'mod': [442, 451, 452, 457, 458, 460, 493]}, "('Tacotron', '_learning_rate_decay', 497)": {'mod': [513, 514, 515, 516, 517, 518]}}}, {'path': 'synthesizer/tacotron2.py', 'status': 'modified', 'Loc': {"('Tacotron2', '__init__', 12)": {'mod': [15, 16, 17, 19, 20, 21, 55, 59, 60, 62]}}}, {'path': 'synthesizer/train.py', 'status': 'modified', 'Loc': {"(None, 'add_train_stats', 35)": {'mod': [36, 38, 39, 40, 41, 44, 46, 47, 49, 50, 51, 52, 54, 56, 57, 58, 60]}, "(None, 'add_eval_stats', 63)": {'mod': [66, 67, 68, 69, 70, 71, 72, 75, 76, 77]}, "(None, 'model_train_mode', 85)": {'mod': [86]}, "(None, 'model_test_mode', 98)": {'mod': [99]}, "(None, 'train', 110)": {'mod': [139, 143, 167, 172, 177, 179, 181]}}}, {'path': 'vocoder/inference.py', 'status': 'modified', 'Loc': {"(None, 'load_model', 8)": {'mod': [9, 26, 30]}}}, {'path': 'vocoder/models/fatchord_version.py', 'status': 'modified', 'Loc': {"('WaveRNN', 'generate', 149)": {'mod': [160, 171, 172, 173]}, "('WaveRNN', 'pad_tensor', 258)": {'mod': [263]}, "('WaveRNN', 'fold_with_overlap', 270)": {'mod': [309]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"encoder/inference.py",
"synthesizer/inference.py",
"synthesizer/models/modules.py",
"vocoder/inference.py",
"synthesizer/models/tacotron.py",
"synthesizer/train.py",
"synthesizer/models/attention.py",
"vocoder/models/fatchord_version.py",
"synthesizer/feeder.py",
"synthesizer/models/helpers.py",
"synthesizer/tacotron2.py",
"demo_cli.py",
"encoder/train.py"
],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 | |
CorentinJ | Real-Time-Voice-Cloning | 1b8d2e794b32039aa7ecc6367dabb64a3e5e6467 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/89 | Charmap codec can't encode | Hi,
I'm running the CLI demo and got all the way to "Write a sentence..", and then I get this error. Could you please help? I've been trying to get this to work since 8/8, working through many setbacks... I'm finally close.
`
Created the mel spectrogram
Synthesizing the waveform:
Traceback (most recent call last):
File "demo_cli.py", line 161, in <module>
generated_wav = vocoder.infer_waveform(spec)
File "C:\Users\selinakvle\Real-Time-Voice-Cloning\vocoder\inference.py", line 57, in infer_waveform
wav = _model.generate(mel, batched, target, overlap, hp.mu_law, progress_callback)
File "C:\Users\selinakvle\Real-Time-Voice-Cloning\vocoder\models\fatchord_version.py", line 219, in generate
progress_callback(i, seq_len, b_size, gen_rate)
File "C:\Users\selinakvle\Real-Time-Voice-Cloning\vocoder\models\fatchord_version.py", line 248, in gen_display
stream(msg)
File "C:\Users\selinakvle\Real-Time-Voice-Cloning\vocoder\display.py", line 16, in stream
sys.stdout.write("\r{%s}" % message)
File "C:\Users\selinakvle\AppData\Local\Programs\Python\Python36\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 4-19: character maps to <undefined>
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "demo_cli.py", line 184, in <module>
print("Caught exception: %s" % repr(e))
File "C:\Users\selinakvle\AppData\Local\Programs\Python\Python36\lib\encodings\cp1252.py", line 19, in encode
return codecs.charmap_encode(input,self.errors,encoding_table)[0]
UnicodeEncodeError: 'charmap' codec can't encode characters in position 54-69: character maps to <undefined>
` | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/372 | null | {'base_commit': '1b8d2e794b32039aa7ecc6367dabb64a3e5e6467', 'files': [{'path': 'vocoder/display.py', 'status': 'modified', 'Loc': {"(None, 'stream', 15)": {'mod': [16]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"vocoder/display.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
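The crash above comes from writing Unicode progress characters to a cp1252 Windows console. A defensive version of the `stream` helper (a sketch, not the repository's exact fix) encodes with the console's own codec and replaces what it cannot represent:

```python
import sys

def stream(message: str) -> None:
    # Encode to the console's codec, replacing unrepresentable characters,
    # so progress bars with block glyphs don't raise UnicodeEncodeError.
    enc = getattr(sys.stdout, "encoding", None) or "utf-8"
    safe = message.encode(enc, errors="replace").decode(enc)
    sys.stdout.write("\r%s" % safe)
```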
CorentinJ | Real-Time-Voice-Cloning | eaf5ec4467795344e7d9601515b017fd8c46e44b | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/413 | enhancement
help wanted | Updates for synthesizer training using LibriTTS | I am certain someone has done this before (such as @sberryman in #126). Would someone please share the code modifications needed to train the synthesizer on LibriTTS?
If we can improve the [training process](https://github.com/CorentinJ/Real-Time-Voice-Cloning/wiki/Training) to use LibriTTS in place of LibriSpeech, we can also generate a new set of pretrained models for better output quality.
Here are some questions to get it started... but feel free to skip ahead and share finished code if it's already available.
* Can `preprocess_librispeech` be reused for TTS? See [synthesizer/preprocess.py](https://github.com/CorentinJ/Real-Time-Voice-Cloning/blob/master/synthesizer/preprocess.py#L13)
* Are LibriTTS alignments available? I see [LibriTTSLabel](https://github.com/kan-bayashi/LibriTTSLabel) and [Montreal-Forced-Aligner](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner). But not sure what else is needed to get it in a form that the RTVC repo can use. | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/441 | null | {'base_commit': 'eaf5ec4467795344e7d9601515b017fd8c46e44b', 'files': [{'path': 'synthesizer/preprocess.py', 'status': 'modified', 'Loc': {"(None, 'preprocess_librispeech', 13)": {'mod': [13, 14, 16, 17, 18, 33, 35]}, "(None, 'preprocess_speaker', 54)": {'mod': [54, 57, 58, 59, 60, 61, 62, 63, 64, 66, 67, 68, 69, 70, 71, 73, 74, 75, 76, 77, 78]}}}, {'path': 'synthesizer_preprocess_audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28], 'mod': [1, 52]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"synthesizer/preprocess.py",
"synthesizer_preprocess_audio.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
CorentinJ | Real-Time-Voice-Cloning | 1b8d2e794b32039aa7ecc6367dabb64a3e5e6467 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/235 | I keep getting TypeError: Invalid file: WindowsPath | Hi guys, I keep getting this error when running python demo_toolbox.py
Exception: Invalid file: WindowsPath('D:/ai/LibriSpeech/train-clean-360/6157/40556/6157-40556-0111.flac')
Also I get this error using eg. python synthesizer_preprocess_audio.py
TypeError: Invalid file: WindowsPath('D:/ai/LibriSpeech/train-clean-100/103/1240/103-1240-0000.flac')
Any help with this would be fantastic; it may be something simple, as I have only just started with Python a few days ago.
I am running Windows 10 with Anaconda and have downloaded all the required files. I just can't seem to load any voices into the toolbox through VoxCeleb, LibriSpeech, or custom audio files in any format, but I can record my own voice in the toolbox.
Cheers
Glenn
| null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/371 | null | {'base_commit': '1b8d2e794b32039aa7ecc6367dabb64a3e5e6467', 'files': [{'path': 'demo_cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [138]}}}, {'path': 'encoder/audio.py', 'status': 'modified', 'Loc': {"(None, 'preprocess_wav', 13)": {'mod': [28]}}}, {'path': 'synthesizer/inference.py', 'status': 'modified', 'Loc': {"('Synthesizer', 'load_preprocess_wav', 106)": {'mod': [111]}}}, {'path': 'synthesizer/preprocess.py', 'status': 'modified', 'Loc': {"(None, 'split_on_silences', 83)": {'mod': [85]}}}, {'path': 'vocoder/audio.py', 'status': 'modified', 'Loc': {"(None, 'load_wav', 18)": {'mod': [19]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"synthesizer/inference.py",
"encoder/audio.py",
"synthesizer/preprocess.py",
"vocoder/audio.py",
"demo_cli.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
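The `Invalid file: WindowsPath(...)` errors above typically mean a `pathlib.Path` object reached a loader that only accepts strings; coercing paths before the call sidesteps it. A minimal sketch (the helper name is mine, not the repository's):

```python
from pathlib import Path

def as_loadable(fpath):
    # Older librosa/soundfile versions reject pathlib.Path objects,
    # so pass plain strings to their load functions instead.
    return str(fpath) if isinstance(fpath, Path) else fpath
```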
CorentinJ | Real-Time-Voice-Cloning | 070a3c187f87136ebe92aa72766f8343772d414e | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/375 | Make webrtcvad optional for inference | > Second thing: webrtcvad. That package is hell to install on windows. There are alternatives for noise removal out there. There's also the possibility of not using it at all, but for both LibriSpeech and LibriTTS I would recommend it.
Propose making webrtcvad completely optional for running demo_cli.py. This would make it a lot easier for Windows users who just want to try cloning a voice with the pretrained models. It would continue to be used when preprocessing audio files for training.
An optional import of webrtcvad could be done using something like this: [https://stackoverflow.com/a/52826085](https://stackoverflow.com/a/52826085) | null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/376 | null | {'base_commit': '070a3c187f87136ebe92aa72766f8343772d414e', 'files': [{'path': 'encoder/audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4, 9], 'mod': [6]}, "(None, 'preprocess_wav', 13)": {'mod': [38]}}}, {'path': 'encoder_preprocess.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39, 41]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8]}}}, {'path': 'synthesizer_preprocess_audio.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [26, 36]}}}, {'path': 'vocoder_preprocess.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30, 39]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"vocoder_preprocess.py",
"synthesizer_preprocess_audio.py",
"encoder_preprocess.py",
"encoder/audio.py"
],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 | |
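The optional-import pattern proposed in the webrtcvad issue above could look like the following sketch (the `preprocess_wav` signature here is illustrative, not the repository's exact one):

```python
try:
    import webrtcvad  # optional: painful to build on Windows
except ImportError:
    webrtcvad = None

def preprocess_wav(wav, trim_silence=True):
    # Degrade gracefully: only run VAD-based silence trimming when the
    # optional webrtcvad dependency is importable; otherwise return the
    # audio untouched so inference still works.
    if trim_silence and webrtcvad is not None:
        pass  # real VAD-based trimming would go here
    return wav
```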
CorentinJ | Real-Time-Voice-Cloning | 6944770f678f0545ef503efd6ec87ac65db0a016 | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/395 | Can't load voice in | Hey guys, whenever I try to load my voice sample, I keep getting either just
`Exception:`
or
`Exception: expected str, bytes or os.PathLike object, not Nonetype`
Please help!
| null | https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/414 | null | {'base_commit': '6944770f678f0545ef503efd6ec87ac65db0a016', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [65], 'mod': [34, 36, 37, 39, 41, 43, 45, 48, 55, 58]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
AUTOMATIC1111 | stable-diffusion-webui | ae6b30907db2060962c533de79ab4bd2c6b12297 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/7021 | bug-report | [Bug]: Inpainting color correctioin | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
Even with color correction turned on in the settings, while inpainting, the final render still ends up with a bluish color on human subjects in the area that was inpainted.
### Steps to reproduce the problem
1. Go to ....
2. Press ....
3. ...
### What should have happened?
Less blue tinting and more of a skin-tone match.
### Commit where the problem happens
e33cace2c2074ef342d027c1f31ffc4b3c3e877e
### What platforms do you use to access UI ?
Windows
### What browsers do you use to access the UI ?
Mozilla Firefox
### Command Line Arguments
```Shell
--xformers
```
### Additional information, context and logs
_No response_ | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12480 | null | {'base_commit': 'ae6b30907db2060962c533de79ab4bd2c6b12297', 'files': [{'path': 'modules/processing.py', 'status': 'modified', 'Loc': {"(None, 'apply_color_correction', 47)": {'mod': [60]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"modules/processing.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AUTOMATIC1111 | stable-diffusion-webui | 9d5becb4decb27683af749058f61e40842fe9c93 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/1364 | bug-report | LDSR: UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte | **Describe the bug**
After today's refactoring commits, using LDSR upscaling produces an error:
`UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte`
This is on Linux, even after a fresh download (I moved the old LDSR related models aside). It looks like an issue with encodings, as utf-8 is involved. I guess it could even possibly work on Windows but not on Linux?
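Since the traceback below shows the YAML reader choking on byte 0x80, the downloaded config may be a binary or corrupted file rather than UTF-8 text. A quick check one could run on the file (an illustrative helper, not part of the webui):

```python
def looks_like_utf8_text(path, probe_bytes: int = 512) -> bool:
    # A valid YAML config should decode as UTF-8; model checkpoints and
    # truncated downloads usually fail within the first few hundred bytes.
    # (Probing a fixed prefix can split a multi-byte character at the
    # boundary, so treat a failure at the very end as inconclusive.)
    with open(path, "rb") as f:
        head = f.read(probe_bytes)
    try:
        head.decode("utf-8")
        return True
    except UnicodeDecodeError:
        return False
```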
**To Reproduce**
Steps to reproduce the behavior:
1. Go to extras
2. Click on LDSR
3. Add an image
4. Click Generate
**Expected behavior**
LDSR should work
**Desktop (please complete the following information):**
- OS: Fedora Linux 37 beta
- Browser: Firefox
- Commit revision: 5c0c778a65c8f89a85395fb10e32d3b35ea57196
**Additional context**
It works in git commit 498515e7a19bb3e8ab36aab2e628eb6be7464401 (a commit from last night, before all the refactoring). Well, "works". Sometimes there's a black edge with missing pixels on the right and bottom. Other times, it's fine. (I think it's related to resolution and/or aspect ratio?)
Complete traceback:
```python
Traceback (most recent call last):
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ui.py", line 153, in f
res = list(func(*args, **kwargs))
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/webui.py", line 63, in f
res = func(*args, **kwargs)
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/extras.py", line 85, in run_extras
res = upscale(image, extras_upscaler_1, upscaling_resize)
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/extras.py", line 79, in upscale
c = upscaler.scaler.upscale(image, resize, upscaler.data_path)
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/upscaler.py", line 61, in upscale
img = self.do_upscale(img, selected_model)
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ldsr_model.py", line 45, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ldsr_model_arch.py", line 87, in super_resolution
model = self.load_model_from_config(half_attention)
File "/var/home/garrett/Source/stable-diffusion/stable-diffusion-webui-auto/modules/ldsr_model_arch.py", line 24, in load_model_from_config
config = OmegaConf.load(self.yamlPath)
File "/var/home/garrett/.local/lib/python3.10/site-packages/omegaconf/omegaconf.py", line 188, in load
obj = yaml.load(f, Loader=get_yaml_loader())
File "/var/home/garrett/.local/lib/python3.10/site-packages/yaml/__init__.py", line 79, in load
loader = Loader(stream)
File "/var/home/garrett/.local/lib/python3.10/site-packages/yaml/loader.py", line 34, in __init__
Reader.__init__(self, stream)
File "/var/home/garrett/.local/lib/python3.10/site-packages/yaml/reader.py", line 85, in __init__
self.determine_encoding()
File "/var/home/garrett/.local/lib/python3.10/site-packages/yaml/reader.py", line 124, in determine_encoding
self.update_raw()
File "/var/home/garrett/.local/lib/python3.10/site-packages/yaml/reader.py", line 178, in update_raw
data = self.stream.read(size)
File "/usr/lib64/python3.10/codecs.py", line 322, in decode
(result, consumed) = self._buffer_decode(data, self.errors, final)
UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 64: invalid start byte
``` | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/1371 | null | {'base_commit': '2b03f0bbda1229dff6e7ab6f656b28587eba8308', 'files': [{'path': 'modules/bsrgan_model.py', 'status': 'modified', 'Loc': {"('UpscalerBSRGAN', 'load_model', 63)": {'mod': [72]}}}, {'path': 'modules/ldsr_model.py', 'status': 'modified', 'Loc': {"('UpscalerLDSR', None, 13)": {'add': [24]}, "('UpscalerLDSR', 'load_model', 24)": {'mod': [26]}, "('UpscalerLDSR', 'do_upscale', 38)": {'mod': [44]}}}, {'path': 'modules/ldsr_model_arch.py', 'status': 'modified', 'Loc': {"('LDSR', 'super_resolution', 86)": {'mod': [101, 103, 114]}}}, {'path': 'modules/modelloader.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'load_models', 13)": {'mod': [44]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"modules/bsrgan_model.py",
"modules/modelloader.py",
"modules/ldsr_model.py",
"modules/ldsr_model_arch.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AUTOMATIC1111 | stable-diffusion-webui | f92d61497a426a19818625c3ccdaae9beeb82b31 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/14024 | enhancement | [Feature Request]: Img2Img inpainting/sketching - Non-binary/alpha weighted denoising mask | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
#### Problem to solve
It appears that the denoiser only considers a binary mask (with a hard boundary) with respect to what pixels should be denoised, even with extreme blurring values. Specifically, only if the mask/sketch opacity is greater than 50% does the region under that pixel get denoised. The resulting image and the original image are simply alpha-blended together using the mask opacity values.
#### Why this is a problem
- When inpainting, even with a very high mask blur, a seam will appear at the 50% opacity threshold.
- When inpaint-sketching, with any amount of mask blur, the colors of the sketch will bleed into regions of the image that do not receive denoising. (Without mask blur the results are full of seams.)
- Inpaint sketching with 50% mask transparency or more is pointless as nothing is inpainted.
- It is difficult to inpaint objects with indefinite boundaries like dust clouds, or in any situation where some kind of gradual seamless transition in texture is needed. In these cases, the original texture is destroyed when it should be partially preserved.
#### What possibilities solving it brings
- Brushes with feathered edges
- Compositing images with alpha channels
- Depth-related effects if the mask represents a depth map
#### Proposed solution
**Interpret the mask opacity as a per-pixel multiplier for the denoising strength.**
AFAIK there are a few ways one could achieve this effect:
- Perhaps existing models support this implicitly - when any part of the pipeline (noising and denoising) considers the denoising strength parameter, have it examine a denoising value assigned to each 8x8 block of pixels (instead of a single global parameter). E.g. scale the amount of latent noise added, and scale the change to the latent block created by the denoiser at each iteration.
- Modify the latent image before and after noising steps - The initial noise that is added to the latent image can be scaled according to each 8x8 block's denoising strength. Then after each step, "pull" each 8x8 block's latent vector back to what it was originally. The amount it gets pulled back depends on the denoising strength of that block.
I believe either of these would allow inpainting objects with partial opacity or very gradual transitions, where content in a transition region is preserved.
##### Alternate solution: dithering
A simpler option could be to use dithering to decide whether a given pixel/block is masked. In other words, using some kind of dithering pattern (Bayer, blue noise, Floyd–Steinberg) the mask opacity represents a probability a given element of the image is affected by the denoiser.
##### Alternate solution: adjust mask threshold
An even simpler solution could be to change the mask opacity threshold at which denoising occurs from >=50% to >0%. In other words, if the mask has opacity greater than 0, it is included in the denoising.
Then, the original content could be blended over-top to completely hide the seam at the point where the mask has 0 opacity.
However, the main drawback is that ghosting artifacts will appear where both the original and modified image are visible. (Though this is an issue with the current implementation anyway.)
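The threshold-then-blend behaviour described above, versus the proposed per-pixel weighting, can be sketched with scalar stand-ins for pixel values (illustrative only; the function names are hypothetical and not webui code):

```python
def proposed_blend(original, denoised, opacity):
    """Proposed: mask opacity acts as a per-pixel denoising weight."""
    return denoised * opacity + original * (1.0 - opacity)

def current_behaviour(original, denoised, opacity):
    """As described above: denoise only where opacity >= 50%,
    then alpha-blend the result with the original."""
    result = denoised if opacity >= 0.5 else original
    return result * opacity + original * (1.0 - opacity)

# Below the 50% threshold the current behaviour ignores the denoised
# value entirely, which is where the visible seam comes from.
```

With `opacity=0.25`, `current_behaviour` returns the original pixel unchanged, while `proposed_blend` still mixes in 25% of the denoised value.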
### Proposed workflow
1. Open Img2Img -> inpaint/inpaint sketch, load an image
2. Select a brush with options for opacity, force/flow and softness. (Mask blur and transparency may be made obsolete by this feature.)
3. (Optional) Tweak the alpha power slider. Repeated iterations may cause partially masked latent blocks to still have strong modifications, pushing the transition zone to regions with almost no masking. Bringing the mask opacity to a power could help make the transitions more perceptually gradual.
4. When ready, regenerate the image to observe no seams, gradual transitions and partial preservation of partially masked content, and no color leakage from blurred/soft sketch strokes. | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/14208 | null | {'base_commit': 'f92d61497a426a19818625c3ccdaae9beeb82b31', 'files': [{'path': 'modules/images.py', 'status': 'modified', 'Loc': {}}, {'path': 'modules/processing.py', 'status': 'modified', 'Loc': {"(None, 'process_images_inner', 750)": {'add': [869, 924], 'mod': [927, 931, 941, 943, 950]}, "('StableDiffusionProcessingImg2Img', None, 1345)": {'add': [1353]}, "(None, 'apply_overlay', 65)": {'mod': [65, 66, 67, 69, 72, 73, 74, 75, 76]}, "(None, 'create_binary_mask', 84)": {'mod': [84, 86]}, "('StableDiffusionProcessing', None, 116)": {'mod': [311, 348]}, "('StableDiffusionProcessing', 'inpainting_image_conditioning', 311)": {'mod': [323, 324]}, "('StableDiffusionProcessing', 'img2img_image_conditioning', 348)": {'mod': [360]}, "('StableDiffusionProcessingImg2Img', 'init', 1388)": {'mod': [1399, 1506, 1518]}, "('StableDiffusionProcessingImg2Img', 'sample', 1520)": {'mod': [1530]}}}, {'path': 'modules/scripts.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13, 18]}, "('Script', None, 30)": {'add': [208, 215]}, "('ScriptRunner', None, 496)": {'add': [769, 777]}}}, {'path': 'modules/sd_samplers_cfg_denoiser.py', 'status': 'modified', 'Loc': {"('CFGDenoiser', '__init__', 41)": {'add': [58]}, "('CFGDenoiser', 'forward', 91)": {'add': [107, 209], 'mod': [109, 211]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"modules/sd_samplers_cfg_denoiser.py",
"modules/processing.py",
"modules/images.py",
"modules/scripts.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AUTOMATIC1111 | stable-diffusion-webui | 09c1be96748584b08b6299024bb7b64bafb09d09 | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/12139 | enhancement | [Feature Request]: command-line argument to disable extensions | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What would your feature do ?
New command-line option to disable all extensions. This would make it easier to troubleshoot during upgrades or development. It would also be quicker than starting the UI, clicking the disable extensions option within the extensions tab, and then restarting. And sometimes an extension might prevent the UI from even starting, making that impossible anyway. When this flag is set at runtime, that should override the similar feature within the Extensions tab, to indicate that it's not possible to run extensions in this mode. I would suggest graying out or otherwise indicate in the extension tab that we are running in no-extensions mode.
Suggested command-line argument name: "--disable-all-extensions" to align with "--update-all-extensions".
### Proposed workflow
1. Add option --disable-all-extensions to launch script
2. Start webui
3. No extensions will be loaded
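A minimal sketch of how such a flag could gate extension loading (hypothetical wiring for illustration, not the actual webui implementation):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument(
    "--disable-all-extensions",
    action="store_true",
    help="launch webui without enabling any extensions",
)

def active_extensions(all_extensions, opts):
    # With the flag set, behave as if no extensions are installed.
    if opts.disable_all_extensions:
        return []
    return [ext for ext in all_extensions if ext.get("enabled", True)]

opts = parser.parse_args(["--disable-all-extensions"])
print(active_extensions([{"name": "controlnet"}], opts))  # → []
```

Keeping the check in one helper means the UI layer can also consult it to gray out the Extensions tab when the flag is active.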
### Additional information
_No response_ | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/12294 | null | {'base_commit': '09c1be96748584b08b6299024bb7b64bafb09d09', 'files': [{'path': 'modules/cmd_args.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [113]}}}, {'path': 'modules/extensions.py', 'status': 'modified', 'Loc': {"(None, 'list_extensions', 138)": {'add': [145], 'mod': [144]}, "(None, 'active', 13)": {'mod': [14, 16]}}}, {'path': 'modules/ui_extensions.py', 'status': 'modified', 'Loc': {"(None, 'extension_table', 136)": {'mod': [167]}, "(None, 'create_ui', 520)": {'mod': [540, 541, 542, 543, 544, 545]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"modules/extensions.py",
"modules/cmd_args.py",
"modules/ui_extensions.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AUTOMATIC1111 | stable-diffusion-webui | 67c884196d4627903f6598989251ec5b2c46a4ce | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/10036 | cannot-reproduce
bug-report | [Bug]: LoRa's wont work | ### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?
I have this error code when I use a LoRa, and they are not applied to the prompt
### Steps to reproduce the problem
Using any lora
### What should have happened?
LoRa's should be used
### Commit where the problem happens
5ab7f21
### What platforms do you use to access the UI ?
Windows
### What browsers do you use to access the UI ?
Mozilla Firefox
### Command Line Arguments
```Shell
--deepdanbooru --api --no-half-vae --xformers
```
### List of extensions
DreamArtist-sd-webui-extension https://github.com/7eu7d7/DreamArtist-sd-webui-extension.git [12f80775 (Mon Apr 24 05:53:26 2023)](https://github.com/7eu7d7/DreamArtist-sd-webui-extension.git/commit/12f8077517b11199802f8d448d36ea573debae96) unknown
a1111-sd-webui-tagcomplete https://github.com/DominikDoom/a1111-sd-webui-tagcomplete [a2e7b6bf (Tue May 2 10:30:04 2023)](https://github.com/DominikDoom/a1111-sd-webui-tagcomplete/commit/a2e7b6bf6c8cbdff031b5b5929de150bf548c582) unknown
multi-subject-render https://github.com/Extraltodeus/multi-subject-render.git [03427e26 (Mon Mar 6 14:11:30 2023)](https://github.com/Extraltodeus/multi-subject-render.git/commit/03427e26bebdc6da0ccfb749bf3c4e7e33d7458b) unknown
openOutpaint-webUI-extension https://github.com/zero01101/openOutpaint-webUI-extension [5e84d6d5 (Mon Apr 10 23:01:41 2023)](https://github.com/zero01101/openOutpaint-webUI-extension/commit/5e84d6d5b1057f837eeecaa49a92a235dd589bc5) unknown
sd-webui-ar https://github.com/alemelis/sd-webui-ar.git [9df49dc2 (Wed Apr 12 09:23:17 2023)](https://github.com/alemelis/sd-webui-ar.git/commit/9df49dc2d7da7333ac918fbce926c2370a3b8b53) unknown
sd-webui-controlnet https://github.com/Mikubill/sd-webui-controlnet [a482867e (Tue May 2 23:13:18 2023)](https://github.com/Mikubill/sd-webui-controlnet/commit/a482867ee5e82b08b221c53662ff0c70c2f18d09) unknown
sd-webui-infinite-image-browsing https://github.com/zanllp/sd-webui-infinite-image-browsing.git [6bc7f4ca (Tue May 2 19:52:50 2023)](https://github.com/zanllp/sd-webui-infinite-image-browsing.git/commit/6bc7f4ca1e10e932e34453fb744d1bd006640b09) unknown
### Console logs
```Shell
Traceback (most recent call last):
File "C:\Users\jesus\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 215, in load_loras
lora = load_lora(name, lora_on_disk.filename)
File "C:\Users\jesus\stable-diffusion-webui\extensions-builtin\Lora\lora.py", line 176, in load_lora
module.weight.copy_(weight)
RuntimeError: output with shape [32, 320, 1, 1] doesn't match the broadcast shape [32, 320, 3, 3]
```
### Additional information
_No response_ | null | https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/10089 | null | {'base_commit': '67c884196d4627903f6598989251ec5b2c46a4ce', 'files': [{'path': 'extensions-builtin/Lora/lora.py', 'status': 'modified', 'Loc': {"(None, 'load_lora', 130)": {'add': [169], 'mod': [168]}, "(None, 'lora_calc_updown', 229)": {'add': [234]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"extensions-builtin/Lora/lora.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
python | cpython | 132fd38f13e127d87dc83c065bf14bf80a0a0c30 | https://github.com/python/cpython/issues/67206 | docs
stdlib
topic-unicode | string.printable.isprintable() returns False | BPO | [23017](https://bugs.python.org/issue23017)
--- | :---
Nosy | @birkenfeld, @vstinner, @ezio-melotti, @stevendaprano, @bitdancer, @4kir4, @iritkatriel
Files | <li>[bug-string-ascii.py](https://bugs.python.org/file37391/bug-string-ascii.py "Uploaded as text/plain at 2014-12-09.03:51:59 by planet36"): Test case shows that string.printable has control characters</li><li>[0001-Fix-string.printable-respect-POSIX-spec.patch](https://bugs.python.org/file37398/0001-Fix-string.printable-respect-POSIX-spec.patch "Uploaded as text/plain at 2014-12-09.14:42:29 by bru")</li><li>[docs-string.printable.diff](https://bugs.python.org/file37441/docs-string.printable.diff "Uploaded as text/plain at 2014-12-13.15:30:05 by @4kir4")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2014-12-09.03:52:01.009>
labels = ['type-bug', '3.9', '3.10', '3.11', 'library', 'expert-unicode', 'docs']
title = 'string.printable.isprintable() returns False'
updated_at = <Date 2021-11-29.16:17:13.755>
user = 'https://bugs.python.org/planet36'
```
bugs.python.org fields:
```python
activity = <Date 2021-11-29.16:17:13.755>
actor = 'iritkatriel'
assignee = 'docs@python'
closed = False
closed_date = None
closer = None
components = ['Documentation', 'Library (Lib)', 'Unicode']
creation = <Date 2014-12-09.03:52:01.009>
creator = 'planet36'
dependencies = []
files = ['37391', '37398', '37441']
hgrepos = []
issue_num = 23017
keywords = ['patch']
message_count = 5.0
messages = ['232343', '232376', '232382', '232613', '407290']
nosy_count = 10.0
nosy_names = ['georg.brandl', 'vstinner', 'ezio.melotti', 'steven.daprano', 'r.david.murray', 'docs@python', 'akira', 'planet36', 'bru', 'iritkatriel']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue23017'
versions = ['Python 3.9', 'Python 3.10', 'Python 3.11']
```
</p></details>
<!-- gh-linked-prs -->
### Linked PRs
* gh-128820
* gh-128867
* gh-128868
<!-- /gh-linked-prs -->
| null | https://github.com/python/cpython/pull/128820 | null | {'base_commit': 'eefd4a0bc764c0272c560f26dd10fb8fba0fb7d4', 'files': [{'path': 'Doc/library/string.rst', 'status': 'modified', 'Loc': {'(None, None, 64)': {'mod': [64, 65, 66]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"Doc/library/string.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
python | cpython | ed8dd598ae7e0d944974af0fd73c2fbb6105fd5c | https://github.com/python/cpython/issues/78453 | type-feature
tests
3.8 (EOL)
3.7 (EOL)
topic-C-API | Reorganize C API tests | BPO | [34272](https://bugs.python.org/issue34272)
--- | :---
Nosy | @rhettinger, @ezio-melotti, @voidspace, @serhiy-storchaka, @miss-islington, @tirkarthi
PRs | <li>python/cpython#8551</li><li>python/cpython#8567</li><li>python/cpython#8689</li><li>python/cpython#8690</li><li>python/cpython#8691</li><li>python/cpython#10078</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2018-07-29.12:56:00.979>
labels = ['3.7', 'expert-C-API', '3.8', 'type-feature', 'tests']
title = 'Reorganize C API tests'
updated_at = <Date 2019-12-09.16:13:39.037>
user = 'https://github.com/serhiy-storchaka'
```
bugs.python.org fields:
```python
activity = <Date 2019-12-09.16:13:39.037>
actor = 'vstinner'
assignee = 'none'
closed = False
closed_date = None
closer = None
components = ['Tests', 'C API']
creation = <Date 2018-07-29.12:56:00.979>
creator = 'serhiy.storchaka'
dependencies = []
files = []
hgrepos = []
issue_num = 34272
keywords = ['patch']
message_count = 8.0
messages = ['322635', '322648', '322656', '322682', '323207', '323209', '323210', '323211']
nosy_count = 6.0
nosy_names = ['rhettinger', 'ezio.melotti', 'michael.foord', 'serhiy.storchaka', 'miss-islington', 'xtreak']
pr_nums = ['8551', '8567', '8689', '8690', '8691', '10078']
priority = 'normal'
resolution = None
stage = 'patch review'
status = 'open'
superseder = None
type = 'enhancement'
url = 'https://bugs.python.org/issue34272'
versions = ['Python 3.6', 'Python 3.7', 'Python 3.8']
```
</p></details>
<!-- gh-pr-number: gh-99431 -->
* PR: gh-99431
<!-- /gh-pr-number -->
<!-- gh-pr-number: gh-99614 -->
* PR: gh-99614
<!-- /gh-pr-number -->
<!-- gh-pr-number: gh-99617 -->
* PR: gh-99617
<!-- /gh-pr-number -->
| null | https://github.com/python/cpython/pull/8551 | null | {'base_commit': '87f5180cd79617223ac513e9f45933f774134e32', 'files': []} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
python | cpython | 0d1cbff833f761f80383f4ce5fe31f686f3f04eb | https://github.com/python/cpython/issues/111259 | performance
topic-regex | Complementary re patterns such as [\s\S] or [\w\W] are much slower than . with DOTALL | # Bug report
### Bug description:
```python
import re
from time import perf_counter as time
p1 = re.compile(r"[\s\S]*")
p2 = re.compile(".*", re.DOTALL)
s = "a"*10000
for p in (p1,p2):
t0 = time()
for i in range(10000): _=p.match(s)
print(time()-t0)
```
Runtimes are 0.44 s vs 0.0016 s on my system. Instead of being simplified, the `[\s\S]` class is stepped through one alternative at a time: `\s` does not match, so `\S` is checked next (the order `[\S\s]` is twice as fast for the string here). This is not solely an issue for larger matches: a 40-char string is processed half as fast when using `[\s\S]`, and even 10 chars take about 25% longer. I'm not completely sure whether this qualifies as a bug or an issue with documentation. Some other regex flavors don't have a DOTALL option and always rely on the `[\s\S]` idiom, so plenty of posts on SO and elsewhere advocate using `[\s\S]` as an all-matching regex pattern. Unsuspecting Python programmers such as @barneygale may expect `[\s\S]` to be identical to using a dot with DOTALL, as seen below.
@serhiy-storchaka
https://github.com/python/cpython/blob/9bb202a1a90ef0edce20c495c9426d9766df11bb/Lib/pathlib.py#L126-L133
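As a quick sanity check (added here for illustration, not part of the original report), the two patterns are functionally equivalent even though their performance differs:

```python
import re

p1 = re.compile(r"[\s\S]*")
p2 = re.compile(".*", re.DOTALL)

sample = "line one\nline two"
# Both match the entire string, newline included.
assert p1.match(sample).group() == sample
assert p2.match(sample).group() == sample
```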
### CPython versions tested on:
3.11, 3.13
### Operating systems tested on:
Linux, Windows
<!-- gh-linked-prs -->
### Linked PRs
* gh-111303
* gh-120742
* gh-120745
* gh-120813
* gh-120814
<!-- /gh-linked-prs -->
| null | https://github.com/python/cpython/pull/111303 | null | {'base_commit': '0d1cbff833f761f80383f4ce5fe31f686f3f04eb', 'files': [{'path': 'Lib/pathlib.py', 'status': 'modified', 'Loc': {"(None, '_compile_pattern_lines', 105)": {'mod': [127, 130, 133]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"Lib/pathlib.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
python | cpython | 4219074127221fdbf545f908361da4ad98437b45 | https://github.com/python/cpython/issues/103971 | type-bug
interpreter-core
3.11
easy
triaged | Incorrect locations for code following `case` blocks | # Bug report
In the following example, the debugger hits a breakpoint that is set in the `aVariable = ...` line, which is in an if-statement whose condition is `False` and which should therefore not be executed. When I run the example with coverage (under PyCharm 2023.1), that line turns green. The print statement is _not_ executed, which matches the expectation.
The assignment does not actually happen. It somehow just _hits_ the line without really executing it.
Minimal reproducible example:
```
match 1:
case 1:
if False:
print('this should not be executed')
aVariable = 'somehow, we can hit a breakpoint here'
```
The same happens, if the last statement in the unreachable code is a _pass_. If I replace it with e.g. a `print()` statement, then everything behaves as expected.
If we extend the example a little bit, that behavior is reproducible for an unreachable _else_ block, too:
```
match 1:
case 1:
if True:
pass
else:
anotherVariable = 'somehow, we can hit a breakpoint here, too'
```
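One way to see which source lines the compiler attributed bytecode to (illustrative, not from the report) is `co_lines()`: a line that appears in this mapping can be "hit" by a debugger even when its statement never runs. The snippet is guarded for Python >= 3.10, since `match` is newer syntax:

```python
import sys

SRC = (
    "match 1:\n"
    "    case 1:\n"
    "        if False:\n"
    "            print('this should not be executed')\n"
)

if sys.version_info >= (3, 10):
    code = compile(SRC, "<example>", "exec")
    # co_lines() yields (start_offset, end_offset, line) triples; the set
    # of lines below is what debuggers use to decide where breakpoints fire.
    attributed = sorted({ln for _, _, ln in code.co_lines() if ln is not None})
else:
    attributed = []
```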
# Your environment
```
python --version
Python 3.11.3
```
```
lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 22.04.2 LTS
Release: 22.04
Codename: jammy
```
I initially encountered that behavior in a 3.10 version. Since I thought an upgrade to 3.11 might fix it, I don't know the exact minor version of 3.10.
I double-checked this with the first online Python debugger that I could find and it behaves the same way.
<!-- gh-linked-prs -->
### Linked PRs
* gh-103980
* gh-103984
<!-- /gh-linked-prs -->
| null | https://github.com/python/cpython/pull/103980 | null | {'base_commit': '4219074127221fdbf545f908361da4ad98437b45', 'files': [{'path': 'Lib/test/test_patma.py', 'status': 'modified', 'Loc': {"('TestTracing', None, 3073)": {'add': [3153]}}}, {'path': 'Python/compile.c', 'status': 'modified', 'Loc': {"(None, 'compiler_match_inner', 7011)": {'add': [7059, 7083]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"Python/compile.c"
],
"doc": [],
"test": [
"Lib/test/test_patma.py"
],
"config": [],
"asset": []
} | 1 |
python | cpython | 63289b9dfbc7d87e81f1517422ee91b6b6d19531 | https://github.com/python/cpython/issues/117089 | Sync with importlib_metadata for Python 3.13 | This issue tracks incorporating updates from importlib_metadata into CPython for Python 3.13, including:
<!-- gh-linked-prs -->
### Linked PRs
* gh-117092
* gh-117094
<!-- /gh-linked-prs -->
| null | https://github.com/python/cpython/pull/117092 | null | {'base_commit': '63289b9dfbc7d87e81f1517422ee91b6b6d19531', 'files': [{'path': '.github/CODEOWNERS', 'status': 'modified', 'Loc': {'(None, None, 122)': {'mod': [122]}}}, {'path': 'Lib/test/test_importlib/fixtures.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [11]}, "('OnSysPath', 'setUp', 85)": {'add': [87]}, "('ZipFixtures', None, 350)": {'mod': [351]}}}, {'path': 'Makefile.pre.in', 'status': 'modified', 'Loc': {'(None, None, 2357)': {'add': [2357]}, '(None, None, 2354)': {'mod': [2354]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"Lib/test/test_importlib/fixtures.py"
],
"doc": [],
"test": [],
"config": [
".github/CODEOWNERS",
"Makefile.pre.in"
],
"asset": []
} | 1 | |
python | cpython | 4c3b283e83459cf7224bbf353300099eba7a2c1c | https://github.com/python/cpython/issues/87192 | type-bug
docs
3.10
3.9 | Missing words renders meaning unclear in fcntl.html | BPO | [43026](https://bugs.python.org/issue43026)
--- | :---
Nosy | @EzraBC
Files | <li>[meaning_unclear.png](https://bugs.python.org/file49766/meaning_unclear.png "Uploaded as image/png at 2021-01-25.23:46:46 by @EzraBC")</li>
<sup>*Note: these values reflect the state of the issue at the time it was migrated and might not reflect the current state.*</sup>
<details><summary>Show more details</summary><p>
GitHub fields:
```python
assignee = None
closed_at = None
created_at = <Date 2021-01-25.23:46:46.269>
labels = ['type-bug', '3.9', '3.10', 'docs']
title = 'Missing words renders meaning unclear in fcntl.html'
updated_at = <Date 2021-01-26.01:22:47.436>
user = 'https://github.com/EzraBC'
```
bugs.python.org fields:
```python
activity = <Date 2021-01-26.01:22:47.436>
actor = 'EzraBC'
assignee = 'docs@python'
closed = False
closed_date = None
closer = None
components = ['Documentation']
creation = <Date 2021-01-25.23:46:46.269>
creator = 'EzraBC'
dependencies = []
files = ['49766']
hgrepos = []
issue_num = 43026
keywords = []
message_count = 1.0
messages = ['385680']
nosy_count = 2.0
nosy_names = ['docs@python', 'EzraBC']
pr_nums = []
priority = 'normal'
resolution = None
stage = None
status = 'open'
superseder = None
type = 'behavior'
url = 'https://bugs.python.org/issue43026'
versions = ['Python 3.9', 'Python 3.10']
```
</p></details>
| null | https://github.com/python/cpython/pull/91658 | null | {'base_commit': '4c3b283e83459cf7224bbf353300099eba7a2c1c', 'files': [{'path': 'Doc/library/fcntl.rst', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [40]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"Doc/library/fcntl.rst"
],
"test": [],
"config": [],
"asset": []
} | 1 |
python | cpython | 733e15f1707ddec502a69c8c324c77e02ca11fa9 | https://github.com/python/cpython/issues/93735 | type-feature
docs
3.11
3.10
3.12 | Run documentation CI from pre-built Python | https://github.com/python/core-workflow/issues/459
There seemed to be general agreement.
A | null | https://github.com/python/cpython/pull/93736 | null | {'base_commit': '733e15f1707ddec502a69c8c324c77e02ca11fa9', 'files': [{'path': '.github/workflows/doc.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25, 34], 'mod': [43, 44, 45, 46, 49, 50, 51, 52, 53, 54, 55, 56]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
".github/workflows/doc.yml"
],
"test": [],
"config": [],
"asset": []
} | 1 |
THUDM | ChatGLM-6B | 2873a6f452340565ff3cd130d5f7009a35c12154 | https://github.com/THUDM/ChatGLM-6B/issues/493 | [BUG/Help] 运行cli_demo.py时报错UnicodeDecodeError: 'utf-8' codec can't decode byte | ### Is there an existing issue for this?
- [X] I have searched the existing issues
### Current Behavior
Traceback (most recent call last):
File "cli_demo.py", line 57, in <module>
main()
File "cli_demo.py", line 33, in main
query = input("\n用户:")
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xe6 in position 6: invalid continuation byte
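A hedged workaround sketch (not the project's actual fix, which simply adjusted `cli_demo.py`): decode interactive input with `errors="replace"` so a stray invalid byte cannot crash the prompt loop. The stream below simulates terminal input containing one bad byte:

```python
import io

# Simulated stdin: a valid UTF-8 line followed by a line with an
# invalid start byte (0xe6 with no continuation bytes).
raw = io.BytesIO("你好\n".encode("utf-8") + b"\xe6\n")
stream = io.TextIOWrapper(raw, encoding="utf-8", errors="replace")

lines = [line.rstrip("\n") for line in stream]
# The bad byte becomes U+FFFD instead of raising UnicodeDecodeError.
```

The same wrapper could in principle be applied to `sys.stdin.buffer` before calling `input()`-style reads.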
### Expected Behavior
_No response_
### Steps To Reproduce
python cli_demo.py | null | https://github.com/THUDM/ChatGLM-6B/pull/934 | null | {'base_commit': '2873a6f452340565ff3cd130d5f7009a35c12154', 'files': [{'path': 'cli_demo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"cli_demo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
huggingface | transformers | b9af152efb748b1bff8f6fe0130e62ebb8e11a53 | https://github.com/huggingface/transformers/issues/21330 | New model
Good First Issue | Add XLM-V | ### Model description
[XLM-V: Overcoming the Vocabulary Bottleneck in Multilingual Masked Language Models](https://arxiv.org/abs/2301.10472)
Large multilingual language models typically rely on a single vocabulary shared across 100+ languages. As these models have increased in parameter count and depth, vocabulary size has remained largely unchanged. This vocabulary bottleneck limits the representational capabilities of multilingual models like XLM-R. In this paper, we introduce a new approach for scaling to very large multilingual vocabularies by de-emphasizing token sharing between languages with little lexical overlap and assigning vocabulary capacity to achieve sufficient coverage for each individual language. Tokenizations using our vocabulary are typically more semantically meaningful and shorter compared to XLM-R. Leveraging this improved vocabulary, we train XLM-V, a multilingual language model with a one million token vocabulary. XLM-V outperforms XLM-R on every task we tested on ranging from natural language inference (XNLI), question answering (MLQA, XQuAD, TyDiQA), and named entity recognition (WikiAnn) to low-resource tasks (Americas NLI, MasakhaNER).
Should work as [XLM-RoBERTa](https://twitter.com/LiangDavis/status/1618738467315531777?s=20&t=nObyGbBEqmBZr9rmTEAeVg)
### Open source status
- [X] The model implementation is available
- [X] The model weights are available
### Provide useful links for the implementation
_No response_ | null | https://github.com/huggingface/transformers/pull/21498 | null | {'base_commit': 'b9af152efb748b1bff8f6fe0130e62ebb8e11a53', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [444]}}}, {'path': 'README_es.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [437]}}}, {'path': 'README_hd.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [409]}}}, {'path': 'README_ja.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [471]}}}, {'path': 'README_ko.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [386]}}}, {'path': 'README_zh-hans.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [410]}}}, {'path': 'README_zh-hant.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [422]}}}, {'path': 'docs/source/de/index.mdx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [184]}}}, {'path': 'docs/source/en/_toctree.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [393]}}}, {'path': 'docs/source/en/index.mdx', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [223]}}}, {'path': 'src/transformers/models/auto/configuration_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [535]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"src/transformers/models/auto/configuration_auto.py"
],
"doc": [
"README_hd.md",
"docs/source/en/_toctree.yml",
"README_zh-hans.md",
"README.md",
"README_es.md",
"docs/source/de/index.mdx",
"README_zh-hant.md",
"README_ko.md",
"README_ja.md",
"docs/source/en/index.mdx"
],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | b8378b658e9846e647d15a8fd85ad1421326b1e5 | https://github.com/huggingface/transformers/issues/28007 | Can't do word timestamps and beam search at the same time (whisper) | ### System Info
Tested on python 3.8.10, transformers 4.36.0.dev0
### Who can help?
@ArthurZucker @sanchit-gandhi (suggested by peregilk)
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
```
from transformers import pipeline
import torch
model = "NbAiLabBeta/nb-whisper-base"
device = "cuda:0"
p = pipeline("automatic-speech-recognition",
             model,
             torch_dtype=torch.float16,
             device=device,
             return_timestamps="word")
args = {"language": "norwegian", "task": "transcribe", "num_beams": 3}
outputs = p(audiofile,
            chunk_length_s=28,
            batch_size=6,
            generate_kwargs=args)
```
Fails with:
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py", line 357, in __call__
    return super().__call__(inputs, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1132, in __call__
    return next(
  File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 124, in __next__
    item = next(self.iterator)
  File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/pt_utils.py", line 266, in __next__
    processed = self.infer(next(self.iterator), **self.params)
  File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 1046, in forward
    model_outputs = self._forward(model_inputs, **forward_params)
  File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/automatic_speech_recognition.py", line 552, in _forward
    generate_kwargs["num_frames"] = stride[0] // self.feature_extractor.hop_length
TypeError: unsupported operand type(s) for //: 'tuple' and 'int'
```
It works with *either* num_beams:1 OR return_timestamps=True/False, but not combined.
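The traceback points at the stride handling in the pipeline's `_forward`: under beam search, `stride[0]` evidently arrives as a tuple rather than an int. A stdlib-only sketch of that failure mode and a defensive unwrap (the values and helper names are illustrative, not the pipeline's actual code):

```python
hop_length = 160  # illustrative; stands in for feature_extractor.hop_length

def num_frames_from_stride(stride):
    # naive version, mirrors the failing line in automatic_speech_recognition.py
    return stride[0] // hop_length

flat_stride = (4800, 0, 800)
print(num_frames_from_stride(flat_stride))  # 30

nested_stride = ((4800, 0, 800),)  # the shape beam-search batching can produce
try:
    num_frames_from_stride(nested_stride)
except TypeError as e:
    print(f"TypeError: {e}")  # unsupported operand type(s) for //: 'tuple' and 'int'

def num_frames_robust(stride):
    # defensive variant: unwrap one level of nesting if present
    first = stride[0]
    if isinstance(first, tuple):
        first = first[0]
    return first // hop_length

print(num_frames_robust(nested_stride))  # 30
```

The robust variant only illustrates the direction of a fix; the actual repair landed in the pipeline and model code referenced by the linked PR.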
### Expected behavior
It should return processed data. :) | null | https://github.com/huggingface/transformers/pull/28114 | null | {'base_commit': 'b8378b658e9846e647d15a8fd85ad1421326b1e5', 'files': [{'path': 'src/transformers/models/whisper/modeling_whisper.py', 'status': 'modified', 'Loc': {"('WhisperForConditionalGeneration', 'generate', 1859)": {'add': [2226]}, "('WhisperForConditionalGeneration', '_extract_token_timestamps', 2539)": {'add': [2557], 'mod': [2559, 2561, 2562, 2563, 2564, 2566, 2567, 2569, 2572, 2573]}}}, {'path': 'src/transformers/pipelines/automatic_speech_recognition.py', 'status': 'modified', 'Loc': {"('AutomaticSpeechRecognitionPipeline', '_forward', 533)": {'mod': [562]}}}, {'path': 'tests/models/whisper/test_modeling_whisper.py', 'status': 'modified', 'Loc': {"('WhisperModelIntegrationTests', None, 1447)": {'add': [1852]}}}, {'path': 'tests/pipelines/test_pipelines_automatic_speech_recognition.py', 'status': 'modified', 'Loc': {"('AutomaticSpeechRecognitionPipelineTests', None, 60)": {'add': [676]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"src/transformers/models/whisper/modeling_whisper.py",
"src/transformers/pipelines/automatic_speech_recognition.py"
],
"doc": [],
"test": [
"tests/models/whisper/test_modeling_whisper.py",
"tests/pipelines/test_pipelines_automatic_speech_recognition.py"
],
"config": [],
"asset": []
} | 1 | |
huggingface | transformers | b231a413f5d58592bb4d98304c3d3b668c5d4a42 | https://github.com/huggingface/transformers/issues/4657 | PyTorch | --fp causes an issue when running example scripts in distributed mode | # 🐛 Bug
## Information
Model I am using (Bert, XLNet ...):
`roberta-large`
Language I am using the model on (English, Chinese ...):
`English`
The problem arises when using:
* the official example scripts
The tasks I am working on is:
* Finetuning a LM with `run_language_modeling.py` and the SST-2 task with `run_glue.py`
* my own dataset
## To reproduce
If I run either of the following commands, I get the error included below. However, if I remove `--fp16`, everything works normally. Also, if I add `--fp16` but run it non-distributed, everything works normally. So it appears there is an issue with running `--fp16` in a distributed fashion. I haven't had an issue with this before, so I'm not sure what the problem is. Any ideas? Thanks in advance.
I installed apex in two different way, but still get the same results.
```
#Install package required for fp16 computations
RUN git clone https://github.com/NVIDIA/apex.git \
&& cd apex \
&& python3 setup.py install --cuda_ext --cpp_ext
```
```
# Install package required for fp16 computations
RUN git clone https://github.com/NVIDIA/apex.git \
&& cd apex \
&& pip3 install -v --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" ./
```
```
python3 -m torch.distributed.launch --nproc_per_node 2 run_language_modeling.py --output_dir=/ptcc/shared/lm_roberta_20200528_164228 --model_type=roberta --do_train --train_data_file=/ptcc/data/train.txt --do_eval --eval_data_file=/ptcc/data/test.txt --evaluate_during_training --per_gpu_train_batch_size=2 --per_gpu_eval_batch_size=2 --learning_rate=5e-06 --model_name_or_path=roberta-large --mlm --max_steps=120000 --warmup_steps=10000 --save_steps=12000 --seed=42 --fp16 --logging_dir=/ptcc/shared/roberta_20200528_164228_tf_logs
```
```
python3 -m torch.distributed.launch --nproc_per_node 2 run_glue.py --model_type roberta --task_name SST-2 --do_train --do_eval --evaluate_during_training --data_dir /ptcc/data/ --per_gpu_train_batch_size 2 --per_gpu_eval_batch_size 2 --learning_rate 1e-06 --output_dir clf_roberta_20200528_162937 --model_name_or_path /ptcc/shared/lm_roberta_20200528_113420 --num_train_epochs 2.0 --save_steps 1000 --seed 42 --fp16 --logging_dir=/ptcc/shared/roberta_20200528_162937_tf_logs
```
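An illegal memory access under multi-process fp16 is often a symptom of a worker touching CUDA memory on the wrong GPU, i.e. of processes not pinning themselves to their local rank before any CUDA work starts (the eventual fix added a `torch.cuda.set_device` call in `training_args.py`). A stdlib-only sketch of how a launched worker resolves its local rank — the `--local_rank=N` argv form and the `LOCAL_RANK` env var are what `torch.distributed.launch`-style launchers commonly provide; in real code `torch.cuda.set_device(local_rank)` would follow:

```python
# Stdlib-only sketch: each distributed worker should pin itself to its own GPU
# before any CUDA allocation, otherwise fp16/apex buffers can end up on the
# wrong device. Only the rank-resolution contract is shown here.
def resolve_local_rank(argv, environ):
    # older launchers pass --local_rank=N on argv; newer ones set LOCAL_RANK
    for arg in argv:
        if arg.startswith("--local_rank="):
            return int(arg.split("=", 1)[1])
    return int(environ.get("LOCAL_RANK", 0))

print(resolve_local_rank(["--local_rank=1"], {}))   # 1
print(resolve_local_rank([], {"LOCAL_RANK": "3"}))  # 3
```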
```
ptcc_1 | 05/28/2020 20:30:38 - INFO - transformers.trainer - Starting fine-tuning.
Epoch: 0%| | 0/2 [00:00<?, ?it/s] Traceback (most recent call last):
ptcc_1 | File "/ptcc/run_glue.py", line 228, in <module>
ptcc_1 | main()
ptcc_1 | File "/ptcc/run_glue.py", line 160, in main
ptcc_1 | model_path=model_args.model_name_or_path if os.path.isdir(model_args.model_name_or_path) else None
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 470, in train
ptcc_1 | tr_loss += self._training_step(model, inputs, optimizer)
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/transformers/trainer.py", line 577, in _training_step
ptcc_1 | scaled_loss.backward()
ptcc_1 | File "/usr/lib/python3.6/contextlib.py", line 88, in __exit__
ptcc_1 | next(self.gen)
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/handle.py", line 127, in scale_loss
ptcc_1 | should_skip = False if delay_overflow_check else loss_scaler.update_scale()
ptcc_1 | File "/usr/local/lib/python3.6/dist-packages/apex-0.1-py3.6-linux-x86_64.egg/apex/amp/scaler.py", line 200, in update_scale
ptcc_1 | self._has_overflow = self._overflow_buf.item()
ptcc_1 | RuntimeError: CUDA error: an illegal memory access was encountered
ptcc_1 | /usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:114: UserWarning: Seems like `optimizer.step()` has been overridden after learning rate scheduler initialization. Please, make sure to call `optimizer.step()` before `lr_scheduler.step()`. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
ptcc_1 | "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
ptcc_1 | terminate called after throwing an instance of 'c10::Error'
ptcc_1 | what(): CUDA error: an illegal memory access was encountered (insert_events at /pytorch/c10/cuda/CUDACachingAllocator.cpp:771)
ptcc_1 | frame #0: c10::Error::Error(c10::SourceLocation, std::string const&) + 0x46 (0x7f69777f6536 in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
ptcc_1 | frame #1: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x7ae (0x7f6977a39fbe in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10_cuda.so)
ptcc_1 | frame #2: c10::TensorImpl::release_resources() + 0x4d (0x7f69777e6abd in /usr/local/lib/python3.6/dist-packages/torch/lib/libc10.so)
ptcc_1 | frame #3: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x1d9 (0x7f69c3926ef9 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #4: c10d::Reducer::~Reducer() + 0x23a (0x7f69c391c84a in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #5: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f69c38fb7c2 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #6: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f69c32be466 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #7: <unknown function> + 0x87146b (0x7f69c38fc46b in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #8: <unknown function> + 0x240500 (0x7f69c32cb500 in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #9: <unknown function> + 0x24174e (0x7f69c32cc74e in /usr/local/lib/python3.6/dist-packages/torch/lib/libtorch_python.so)
ptcc_1 | frame #10: /usr/bin/python3() [0x572a27]
ptcc_1 | frame #11: /usr/bin/python3() [0x54eef2]
ptcc_1 | frame #12: /usr/bin/python3() [0x588948]
ptcc_1 | frame #13: /usr/bin/python3() [0x5ad438]
ptcc_1 | frame #14: /usr/bin/python3() [0x5ad44e]
ptcc_1 | frame #15: /usr/bin/python3() [0x5ad44e]
ptcc_1 | frame #16: /usr/bin/python3() [0x56b276]
ptcc_1 | frame #17: PyDict_SetItemString + 0x153 (0x5709f3 in /usr/bin/python3)
ptcc_1 | frame #18: PyImport_Cleanup + 0x76 (0x4f2fc6 in /usr/bin/python3)
ptcc_1 | frame #19: Py_FinalizeEx + 0x5e (0x637e2e in /usr/bin/python3)
ptcc_1 | frame #20: Py_Main + 0x395 (0x638e95 in /usr/bin/python3)
ptcc_1 | frame #21: main + 0xe0 (0x4b0d00 in /usr/bin/python3)
ptcc_1 | frame #22: __libc_start_main + 0xe7 (0x7f69e4727b97 in /lib/x86_64-linux-gnu/libc.so.6)
ptcc_1 | frame #23: _start + 0x2a (0x5b250a in /usr/bin/python3)
```
## Environment info
- `transformers` version: 2.10.0
- Platform: Linux-5.3.0-26-generic-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.6.9
- PyTorch version (GPU?): 1.5.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Using GPU in script?: Y, 2 Tesla V100-SXM2
- Using distributed or parallel set-up in script?: Y, 2 Tesla V100-SXM2
| null | https://github.com/huggingface/transformers/pull/4728 | null | {'base_commit': 'b231a413f5d58592bb4d98304c3d3b668c5d4a42', 'files': [{'path': 'src/transformers/training_args.py', 'status': 'modified', 'Loc': {"('TrainingArguments', '_setup_devices', 158)": {'add': [176], 'mod': [169]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"src/transformers/training_args.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 85a1269e19af022e04bc2aad82572cd5a9e8cdd9 | https://github.com/huggingface/transformers/issues/31778 | Audio | Bug in whisper word-level timestamps (`tokenizer._decode_asr`) | ### System Info
- `transformers` version: 4.42.3
- Platform: Linux-6.1.85+-x86_64-with-glibc2.35
- Python version: 3.10.12
- Huggingface_hub version: 0.23.4
- Safetensors version: 0.4.3
- Accelerate version: not installed
- Accelerate config: not found
- PyTorch version (GPU?): 2.3.0+cu121 (False)
- Tensorflow version (GPU?): 2.15.0 (False)
- Flax version (CPU?/GPU?/TPU?): 0.8.4 (cpu)
- Jax version: 0.4.26
- JaxLib version: 0.4.26
- Using distributed or parallel set-up in script?: no
### Who can help?
@sanchit-gandhi
### Information
- [ ] The official example scripts
- [ ] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)
### Reproduction
Minimal reproduction:
```py
import torch
model_outputs = [
{
'stride': [30, 0, 5],
'tokens': torch.tensor([[
50257, 50362, 8410, 7283, 0, 2329,
8410, 7283, 0, 2094, 470, 1309,
534, 10625, 307, 10625, 13, 34668,
11, 345, 531, 9439, 11, 523,
655, 8410, 7283, 0, 39134, 16592,
10560, 3955, 50, 0, 7102, 5446,
46, 0, 25848, 8410, 7283, 0,
2773, 661, 4320, 1943, 981, 345,
821, 8066, 7765, 510, 290, 670,
1327, 379, 340, 13, 10528, 318,
5340, 13, 50256
]]),
'token_timestamps': torch.tensor([[
0, 0, 0, 3.78, 4.22, 5.26, 6.04,
6.54, 7, 7.94, 8.58, 8.58, 8.88, 9.16,
9.54, 9.94, 10.6, 11.38, 11.88, 12.38, 12.44,
12.62, 13, 13.36, 13.64, 14.24, 14.74, 15.12,
15.4, 15.74, 16.1, 16.54, 16.54, 16.78, 17.08,
17.2, 17.36, 17.56, 18.08, 18.58, 19.38, 19.88,
22.54, 22.9, 23.24, 23.5, 24.14, 24.56, 24.7,
24.94, 24.94, 25.18, 25.54, 25.72, 26.04, 26.34,
26.46, 26.84, 27.04, 27.14, 27.54, 28.06, 29.92
]])
},
{
'stride': [30, 5, 5],
'tokens': torch.tensor([[
50257, 50362, 2773, 661, 4320, 1943, 981,
345, 821, 8066, 7765, 510, 290, 670,
1327, 379, 340, 13, 10528, 318, 5340,
13, 921, 815, 651, 284, 262, 966,
810, 2687, 2073, 561, 11238, 290, 345,
821, 407, 8066, 2245, 612, 13, 1400,
11, 644, 389, 345, 4953, 329, 30,
2141, 340, 0, 2329, 466, 340, 0,
3363, 11, 345, 460, 0, 2329, 466,
340, 0, 50256
]]),
'token_timestamps': torch.tensor([[
0, 0, 0, 2.92, 3.24, 3.5, 4.14,
4.56, 4.7, 4.74, 4.92, 5.18, 5.54, 5.74,
6.04, 6.34, 6.46, 6.84, 7.04, 7.18, 7.56,
8.12, 9.68, 10.7, 10.88, 11.1, 11.24, 11.48,
11.82, 12.46, 12.82, 13.2, 13.46, 13.72, 14.08,
14.28, 14.34, 14.56, 14.82, 15.16, 15.72, 16.42,
16.82, 16.86, 17, 17.1, 17.2, 17.56, 18.06,
19.28, 19.6, 20.28, 21.96, 22.64, 24.28, 24.76,
25.18, 25.56, 25.56, 25.84, 26.36, 27.12, 27.54,
27.82, 28.16, 29.48
]])
},
{
'stride': [23.7728125, 5, 0],
'tokens': torch.tensor([[
50257, 50362, 2329, 466,
340, 0, 3363, 345,
460, 0, 2329, 466,
340, 0, 1002, 534,
15867, 318, 3599, 625,
11, 2245, 3501, 510,
13, 50256
]]),
'token_timestamps': torch.tensor([[
0, 0, 0, 2.44, 4.3,
5.04, 5.06, 5.56, 5.8, 6.32,
7.12, 7.56, 7.8, 8.72, 10.04,
12.96, 13.3, 13.44, 13.72, 13.98,
14.86, 15.5, 16, 16.88, 17.76,
20.9
]])
}
]
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('onnx-community/whisper-tiny.en_timestamped')
tokenizer._decode_asr(model_outputs, return_timestamps='word', return_language=False, time_precision=0.02)
```
produces the following **incorrect** transcript:
```py
(" DO IT! Just DO IT! Don't let your dreams be dreams. Yesterday, you said tomorrow, so just DO IT! MAKE YOUR DRIMS! CONTRO! JUST DO IT! Some people dream success while you're gonna wake up and work hard at it. Nothing is impossible. You should get to the point where anyone else would quit and you're not gonna stop there. No, what are you waiting for? Do it! Just do it! Yes, you can! Just do it! Yes you can! Just do it! If your tire is starting over, stop giving up.",
{'chunks': [{'text': ' DO', 'timestamp': (0.0, 3.78)},
{'text': ' IT!', 'timestamp': (3.78, 5.26)},
{'text': ' Just', 'timestamp': (5.26, 6.04)},
{'text': ' DO', 'timestamp': (6.04, 6.54)},
{'text': ' IT!', 'timestamp': (6.54, 7.94)},
{'text': " Don't", 'timestamp': (7.94, 8.58)},
{'text': ' let', 'timestamp': (8.58, 8.88)},
{'text': ' your', 'timestamp': (8.88, 9.16)},
{'text': ' dreams', 'timestamp': (9.16, 9.54)},
{'text': ' be', 'timestamp': (9.54, 9.94)},
{'text': ' dreams.', 'timestamp': (9.94, 11.38)},
{'text': ' Yesterday,', 'timestamp': (11.38, 12.38)},
{'text': ' you', 'timestamp': (12.38, 12.44)},
{'text': ' said', 'timestamp': (12.44, 12.62)},
{'text': ' tomorrow,', 'timestamp': (12.62, 13.36)},
{'text': ' so', 'timestamp': (13.36, 13.64)},
{'text': ' just', 'timestamp': (13.64, 14.24)},
{'text': ' DO', 'timestamp': (14.24, 14.74)},
{'text': ' IT!', 'timestamp': (14.74, 15.4)},
{'text': ' MAKE', 'timestamp': (15.4, 15.74)},
{'text': ' YOUR', 'timestamp': (15.74, 16.1)},
{'text': ' DRIMS!', 'timestamp': (16.1, 17.08)},
{'text': ' CONTRO!', 'timestamp': (17.08, 18.08)},
{'text': ' JUST', 'timestamp': (18.08, 18.58)},
{'text': ' DO', 'timestamp': (18.58, 19.38)},
{'text': ' IT!', 'timestamp': (19.38, 22.54)},
{'text': ' Some', 'timestamp': (22.54, 22.9)},
{'text': ' people', 'timestamp': (22.9, 23.24)},
{'text': ' dream', 'timestamp': (23.24, 23.5)},
{'text': ' success', 'timestamp': (23.5, 24.14)},
{'text': ' while', 'timestamp': (24.14, 24.56)},
{'text': " you're", 'timestamp': (24.56, 24.94)},
{'text': ' gonna', 'timestamp': (24.94, 24.94)},
{'text': ' wake', 'timestamp': (24.94, 25.18)},
{'text': ' up', 'timestamp': (25.18, 25.54)},
{'text': ' and', 'timestamp': (25.54, 25.74)},
{'text': ' work', 'timestamp': (25.74, 26.04)},
{'text': ' hard', 'timestamp': (26.04, 26.34)},
{'text': ' at', 'timestamp': (26.34, 26.46)},
{'text': ' it.', 'timestamp': (26.46, 27.04)},
{'text': ' Nothing', 'timestamp': (27.04, 27.18)},
{'text': ' is', 'timestamp': (27.18, 27.56)},
{'text': ' impossible.', 'timestamp': (27.56, 29.68)},
{'text': ' You', 'timestamp': (29.68, 30.7)},
{'text': ' should', 'timestamp': (30.7, 30.88)},
{'text': ' get', 'timestamp': (30.88, 31.1)},
{'text': ' to', 'timestamp': (31.1, 31.24)},
{'text': ' the', 'timestamp': (31.24, 31.48)},
{'text': ' point', 'timestamp': (31.48, 31.82)},
{'text': ' where', 'timestamp': (31.82, 32.46)},
{'text': ' anyone', 'timestamp': (32.46, 32.82)},
{'text': ' else', 'timestamp': (32.82, 33.2)},
{'text': ' would', 'timestamp': (33.2, 33.46)},
{'text': ' quit', 'timestamp': (33.46, 33.72)},
{'text': ' and', 'timestamp': (33.72, 34.08)},
{'text': " you're", 'timestamp': (34.08, 34.34)},
{'text': ' not', 'timestamp': (34.34, 34.56)},
{'text': ' gonna', 'timestamp': (34.56, 34.82)},
{'text': ' stop', 'timestamp': (34.82, 35.16)},
{'text': ' there.', 'timestamp': (35.16, 36.42)},
{'text': ' No,', 'timestamp': (36.42, 36.86)},
{'text': ' what', 'timestamp': (36.86, 37.0)},
{'text': ' are', 'timestamp': (37.0, 37.1)},
{'text': ' you', 'timestamp': (37.1, 37.2)},
{'text': ' waiting', 'timestamp': (37.2, 37.56)},
{'text': ' for?', 'timestamp': (37.56, 39.28)},
{'text': ' Do', 'timestamp': (39.28, 39.6)},
{'text': ' it!', 'timestamp': (39.6, 41.96)},
{'text': ' Just', 'timestamp': (41.96, 42.64)},
{'text': ' do', 'timestamp': (42.64, 44.28)},
{'text': ' it!', 'timestamp': (44.28, 45.18)},
{'text': ' Yes,', 'timestamp': (45.18, 45.56)},
{'text': ' you', 'timestamp': (45.56, 45.84)},
{'text': ' can!', 'timestamp': (45.84, 47.12)},
{'text': ' Just', 'timestamp': (47.12, 47.54)},
{'text': ' do', 'timestamp': (47.54, 47.82)},
{'text': ' it!', 'timestamp': (44.3, 45.06)},
{'text': ' Yes', 'timestamp': (45.06, 45.56)},
{'text': ' you', 'timestamp': (45.56, 45.8)},
{'text': ' can!', 'timestamp': (45.8, 47.12)},
{'text': ' Just', 'timestamp': (47.12, 47.56)},
{'text': ' do', 'timestamp': (47.56, 47.8)},
{'text': ' it!', 'timestamp': (47.8, 50.04)},
{'text': ' If', 'timestamp': (50.04, 52.96)},
{'text': ' your', 'timestamp': (52.96, 53.3)},
{'text': ' tire', 'timestamp': (53.3, 53.44)},
{'text': ' is', 'timestamp': (53.44, 53.72)},
{'text': ' starting', 'timestamp': (53.72, 53.98)},
{'text': ' over,', 'timestamp': (53.98, 55.5)},
{'text': ' stop', 'timestamp': (55.5, 56.0)},
{'text': ' giving', 'timestamp': (56.0, 56.88)},
{'text': ' up.', 'timestamp': (56.88, 60.9)}]})
```
(Notice at ~46 seconds, it goes back in time):
```py
{'text': ' Yes,', 'timestamp': (45.18, 45.56)},
{'text': ' you', 'timestamp': (45.56, 45.84)},
{'text': ' can!', 'timestamp': (45.84, 47.12)},
{'text': ' Just', 'timestamp': (47.12, 47.54)},
{'text': ' do', 'timestamp': (47.54, 47.82)},
{'text': ' it!', 'timestamp': (44.3, 45.06)},
{'text': ' Yes', 'timestamp': (45.06, 45.56)},
{'text': ' you', 'timestamp': (45.56, 45.8)},
{'text': ' can!', 'timestamp': (45.8, 47.12)},
{'text': ' Just', 'timestamp': (47.12, 47.56)},
{'text': ' do', 'timestamp': (47.56, 47.8)},
{'text': ' it!', 'timestamp': (47.8, 50.04)},
```
For reference, [this](https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/whisper-timestamps-demo.mp4?download=true) is the media I am transcribing.
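The backwards jump comes from how overlapping chunks are merged: decoding runs over overlapping windows, and the merger must decide where one window's token tail lines up with the next window's head. A stdlib-only sketch of that overlap matching (the heuristic and names are illustrative, not the actual `_find_longest_common_sequence` in `tokenization_whisper.py`):

```python
# Stdlib-only sketch of overlap merging between consecutive transcript windows.
def merge_overlap(prev, nxt, max_overlap=10):
    # pick the longest suffix of `prev` that equals a prefix of `nxt`
    best = 0
    for k in range(1, min(len(prev), len(nxt), max_overlap) + 1):
        if prev[-k:] == nxt[:k]:
            best = k
    return prev + nxt[best:]

a = ["just", "do", "it", "yes", "you", "can"]
b = ["yes", "you", "can", "just", "do", "it"]
print(merge_overlap(a, b))
# ['just', 'do', 'it', 'yes', 'you', 'can', 'just', 'do', 'it']
```

With a phrase repeated verbatim in the audio, a matcher can lock onto the wrong repetition and splice in tokens whose timestamps precede the previous chunk's end — which is exactly the duplicated run with decreasing timestamps shown above.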
### Expected behavior
1. The transcript times should be increasing.
2. If you watch the [video](https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/whisper-timestamps-demo.mp4?download=true), it's clear that the repeated phrasing messes something up, duplicating this in the merged output.
3. Result should be something like:
```diff
{'text': ' Do', 'timestamp': (39.28, 39.6)},
{'text': ' it!', 'timestamp': (39.6, 41.96)},
{'text': ' Just', 'timestamp': (41.96, 42.64)},
{'text': ' do', 'timestamp': (42.64, 44.28)},
{'text': ' it!', 'timestamp': (44.28, 45.18)},
- {'text': ' Yes,', 'timestamp': (45.18, 45.56)},
- {'text': ' you', 'timestamp': (45.56, 45.84)},
- {'text': ' can!', 'timestamp': (45.84, 47.12)},
- {'text': ' Just', 'timestamp': (47.12, 47.54)},
- {'text': ' do', 'timestamp': (47.54, 47.82)},
- {'text': ' it!', 'timestamp': (44.3, 45.06)},
- {'text': ' Yes', 'timestamp': (45.06, 45.56)},
+ {'text': ' Yes', 'timestamp': (45.18, 45.56)},
{'text': ' you', 'timestamp': (45.56, 45.8)},
{'text': ' can!', 'timestamp': (45.8, 47.12)},
{'text': ' Just', 'timestamp': (47.12, 47.56)},
{'text': ' do', 'timestamp': (47.56, 47.8)},
{'text': ' it!', 'timestamp': (47.8, 50.04)},
``` | null | https://github.com/huggingface/transformers/pull/32197 | null | {'base_commit': '85a1269e19af022e04bc2aad82572cd5a9e8cdd9', 'files': [{'path': 'src/transformers/models/whisper/tokenization_whisper.py', 'status': 'modified', 'Loc': {"(None, '_find_longest_common_sequence', 1107)": {'mod': [1177]}}}, {'path': 'tests/models/whisper/test_tokenization_whisper.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [340]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"src/transformers/models/whisper/tokenization_whisper.py"
],
"doc": [],
"test": [
"tests/models/whisper/test_tokenization_whisper.py"
],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 1681a6d452b60ff3652a96f03541dfa491124192 | https://github.com/huggingface/transformers/issues/20650 | New model | [New Model] UDOP: Unifying Vision, Text, and Layout for Universal Document Processing | ### Model description
We propose Universal Document Processing (UDOP), a foundation Document AI model which unifies text, image, and layout modalities together with varied task formats, including document understanding and generation. UDOP leverages the spatial correlation between textual content and document image to model image, text, and layout modalities with one uniform representation. With a novel Vision-Text-Layout Transformer, UDOP unifies pretraining and multi-domain downstream tasks into a prompt-based sequence generation scheme. UDOP is pretrained on both large-scale unlabeled document corpora using innovative self-supervised objectives and diverse labeled data. UDOP also learns to generate document images from text and layout modalities via masked image reconstruction. To the best of our knowledge, this is the first time in the field of document AI that one model simultaneously achieves high-quality neural document editing and content customization. Our method sets the state-of-the-art on 9 Document AI tasks, e.g., document understanding and QA, across diverse data domains like finance reports, academic papers, and websites. UDOP ranks first on the leaderboard of the Document Understanding Benchmark (DUE).
### Open source status
- [x] The model implementation is available
- [x] The model weights are available
### Provide useful links for the implementation
UDOP Paper: https://arxiv.org/abs/2212.02623
UDOP Repo: https://github.com/microsoft/UDOP
UDOP Model Weights: https://huggingface.co/ZinengTang/Udop/tree/main | null | https://github.com/huggingface/transformers/pull/22940 | null | {'base_commit': '1681a6d452b60ff3652a96f03541dfa491124192', 'files': [{'path': '.circleci/create_circleci_config.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [477, 487]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [513]}}}, {'path': 'README_es.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [486]}}}, {'path': 'README_fr.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [507]}}}, {'path': 'README_hd.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [460]}}}, {'path': 'README_ja.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [520]}}}, {'path': 'README_ko.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [435]}}}, {'path': 'README_zh-hans.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [459]}}}, {'path': 'README_zh-hant.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [471]}}}, {'path': 'docs/source/en/_toctree.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [772]}}}, {'path': 'docs/source/en/index.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [281]}}}, {'path': 'src/transformers/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [858, 1137, 1216, 3413, 5642, 5917, 5989, 7829]}}}, {'path': 'src/transformers/convert_slow_tokenizer.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1041, 1473]}}}, {'path': 'src/transformers/models/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [222]}}}, {'path': 'src/transformers/models/auto/configuration_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [233, 456, 717]}}}, {'path': 'src/transformers/models/auto/image_processing_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [110]}}}, 
{'path': 'src/transformers/models/auto/modeling_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [221]}}}, {'path': 'src/transformers/models/auto/tokenization_auto.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [420]}}}, {'path': 'src/transformers/utils/dummy_pt_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8343]}}}, {'path': 'src/transformers/utils/dummy_sentencepiece_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [221]}}}, {'path': 'src/transformers/utils/dummy_tokenizers_objects.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [410]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"src/transformers/utils/dummy_pt_objects.py",
"src/transformers/__init__.py",
"src/transformers/models/auto/tokenization_auto.py",
"src/transformers/models/__init__.py",
"src/transformers/models/auto/configuration_auto.py",
".circleci/create_circleci_config.py",
"src/transformers/utils/dummy_tokenizers_objects.py",
"src/transformers/convert_slow_tokenizer.py",
"src/transformers/models/auto/modeling_auto.py",
"src/transformers/utils/dummy_sentencepiece_objects.py",
"src/transformers/models/auto/image_processing_auto.py"
],
"doc": [
"docs/source/en/_toctree.yml",
"README_fr.md",
"README_hd.md",
"README_zh-hans.md",
"README_zh-hant.md",
"README_ja.md",
"README.md",
"README_es.md",
"README_ko.md",
"docs/source/en/index.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
huggingface | transformers | 4b423e607455a7aca1edc4beaa713da58e78ef0b | https://github.com/huggingface/transformers/issues/18068 | bug | StoppingCriteria "scores" is always None | ### System Info
I've written a custom StoppingCriteria subclass and I'm trying to utilize the `scores` in my decision logic, but I'm finding that `scores` is always `None`. Is that intentional?
### Who can help?
@patrickvonplaten, @Narsil, @gante
### Information
- [ ] The official example scripts
- [X] My own modified scripts
### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [X] My own task or dataset (give details below)
### Reproduction
```
import torch
from transformers import StoppingCriteria

class TopPredictionOutsideTargetSetStoppingCriteria(StoppingCriteria):
    def __init__(self, priority_tokens_ids: list):
        self.priority_token_ids = priority_tokens_ids

    def __call__(self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs) -> bool:
        print(f"TopPred SCORES? {scores}, input_ids: {input_ids}")  # <--- "scores" is None but "input_ids" is correct
        top = torch.topk(scores, 1, dim=1).indices[0]
        if top not in self.priority_token_ids:
            return True
        return False
```
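Until `generate()` actually forwards scores to stopping criteria, a defensive `None` guard keeps a criterion like this from crashing. A stdlib-only sketch of the pattern — plain lists stand in for tensors, and the class name is illustrative; a real criterion would subclass `transformers.StoppingCriteria`:

```python
# Stdlib-only sketch (no torch/transformers) of guarding against scores=None.
class PriorityTokenStoppingCriterion:
    def __init__(self, priority_token_ids):
        self.priority_token_ids = set(priority_token_ids)

    def __call__(self, input_ids, scores):
        if scores is None:
            # generate() did not forward scores; fall back to the last token id
            top = input_ids[-1]
        else:
            # argmax over the last score row stands in for torch.topk(..., 1)
            row = scores[-1]
            top = max(range(len(row)), key=row.__getitem__)
        return top not in self.priority_token_ids

crit = PriorityTokenStoppingCriterion([5, 7])
print(crit([1, 2, 5], None))          # False: falls back to last id, 5 is allowed
print(crit([1, 2, 5], [[0.1, 0.9]]))  # True: argmax index is 1, outside {5, 7}
```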
### Expected behavior
Since the function indicates `scores` as an input, I'd expect it to be a non-null value. | null | https://github.com/huggingface/transformers/pull/26863 | null | {'base_commit': '4b423e607455a7aca1edc4beaa713da58e78ef0b', 'files': [{'path': 'src/transformers/generation/stopping_criteria.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [26]}, "('StoppingCriteria', None, 36)": {'mod': [37]}}}, {'path': 'src/transformers/generation/utils.py', 'status': 'modified', 'Loc': {"('GenerationMixin', 'generate', 1351)": {'mod': [1400]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"src/transformers/generation/utils.py",
"src/transformers/generation/stopping_criteria.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |