organization string | repo_name string | base_commit string | iss_html_url string | iss_label string | title string | body string | code null | pr_html_url string | commit_html_url string | file_loc string | own_code_loc list | ass_file_loc list | other_rep_loc list | analysis dict | loctype dict | iss_has_pr int64 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
home-assistant | core | b9753a9f920f002312dc115534afdb422043007c | https://github.com/home-assistant/core/issues/83852 | integration: homekit
integration: braviatv | Sony Bravia TV Integration: Error setting up entry for homekit | ### The problem
Starting with Home Assistant 2022.11.0, the Sony Bravia TV integration stops working with Apple HomeKit (HomeKit integration) due to the following errors:
```txt
2022-12-12 16:07:46.673 WARNING (MainThread) [homeassistant.components.homekit.type_remotes] media_player.sony_xbr_49x835d: Reached maximum number of sources (90)
2022-12-12 16:07:46.690 ERROR (MainThread) [homeassistant.config_entries] Error setting up entry Sony XBR-49X835D:21066 for homekit
Traceback (most recent call last):
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/config_entries.py", line 372, in async_setup
    result = await component.async_setup_entry(hass, self)
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py", line 344, in async_setup_entry
    await homekit.async_start()
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py", line 781, in async_start
    if not await self._async_create_accessories():
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py", line 959, in _async_create_accessories
    acc = self._async_create_single_accessory(entity_states)
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/__init__.py", line 894, in _async_create_single_accessory
    acc = get_accessory(self.hass, self.driver, state, STANDALONE_AID, conf)
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py", line 253, in get_accessory
    return TYPES[a_type](hass, driver, name, state.entity_id, aid, config)
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_media_players.py", line 223, in __init__
    super().__init__(
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/type_remotes.py", line 133, in __init__
    serv_input = self.add_preload_service(
  File "/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py", line 129, in add_preload_service
    self.add_service(service)
  File "/opt/homeassistant/lib64/python3.9/site-packages/pyhap/accessory.py", line 151, in add_service
    self.iid_manager.assign(s)
  File "/opt/homeassistant/lib64/python3.9/site-packages/pyhap/iid_manager.py", line 31, in assign
    iid = self.get_iid_for_obj(obj)
  File "/opt/homeassistant/lib64/python3.9/site-packages/homeassistant/components/homekit/accessories.py", line 669, in get_iid_for_obj
    raise RuntimeError(
RuntimeError: Cannot assign IID 79 to <service display_name=InputSource unique_id=Screen mirroring chars={'ConfiguredName': '', 'InputSourceType': 0, 'IsConfigured': 0, 'CurrentVisibilityState': 0, 'Identifier': 0, 'Name': ''}> as it is already in use by: <service display_name=InputSource unique_id=Screen mirroring chars={'ConfiguredName': 'Screen mirroring', 'InputSourceType': 0, 'IsConfigured': 1, 'CurrentVisibilityState': 0, 'Identifier': 6, 'Name': 'Screen mirroring'}>
```
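The final RuntimeError shows two InputSource services carrying the same unique_id ("Screen mirroring"), so the second one tries to reuse an IID that is already taken. The actual fix lives in the linked PR; purely as an illustrative sketch (this function and its names are hypothetical, not Home Assistant code), duplicate source names can be suffixed so every service gets a distinct identifier:

```python
def dedupe_sources(names):
    """Append a numeric suffix to repeated source names so each is unique."""
    seen = {}
    unique = []
    for name in names:
        count = seen.get(name, 0)
        seen[name] = count + 1
        # First occurrence keeps its name; later ones get " 2", " 3", ...
        unique.append(name if count == 0 else f"{name} {count + 1}")
    return unique

print(dedupe_sources(["HDMI 1", "Screen mirroring", "Screen mirroring"]))
# → ['HDMI 1', 'Screen mirroring', 'Screen mirroring 2']
```

With a source list like the TV's above, the second "Screen mirroring" entry would become "Screen mirroring 2" instead of colliding.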
### What version of Home Assistant Core has the issue?
2022.12.3
### What was the last working version of Home Assistant Core?
2022.10.5
### What type of installation are you running?
Home Assistant Core
### Integration causing the issue
Sony Bravia TV
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/braviatv/
### Diagnostics information
_Same warning and traceback as shown under "The problem" above._
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_Same warning and traceback as shown under "The problem" above._
### Additional information
HomeKit Integration
https://www.home-assistant.io/integrations/homekit/ | null | https://github.com/home-assistant/core/pull/83890 | null | {'base_commit': 'b9753a9f920f002312dc115534afdb422043007c', 'files': [{'path': 'homeassistant/components/homekit/type_remotes.py', 'status': 'modified', 'Loc': {"('RemoteInputSelectAccessory', None, 78)": {'add': [145]}, '(None, None, None)': {'mod': [21]}, "('RemoteInputSelectAccessory', '__init__', 81)": {'mod': [99]}, "('RemoteInputSelectAccessory', '_async_update_input_state', 159)": {'mod': [172]}}}, {'path': 'tests/components/homekit/test_type_media_players.py', 'status': 'modified', 'Loc': {"(None, 'test_media_player_television_max_sources', 460)": {'add': [514]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/homekit/type_remotes.py"
],
"doc": [],
"test": [
"tests/components/homekit/test_type_media_players.py"
],
"config": [],
"asset": []
} | 1 |
home-assistant | core | 185f7beafc05fc355109fd417350591459650366 | https://github.com/home-assistant/core/issues/59106 | integration: octoprint | Error adding entities for domain sensor with platform octoprint when no tool0 | ### The problem
One of my OctoPrint instances is connected to a CNC which does not have a tool0, as there is no extruder. In the previous integration you could simply choose not to monitor it, but the new UI offers no option to ignore components. As a result, I have an "Octoprint target tool0 temp" sensor that is listed as "unavailable", and I receive the following errors in my log, which I believe are related.
```
Logger: homeassistant.components.sensor
Source: components/octoprint/sensor.py:215
Integration: Sensor (documentation, issues)
First occurred: 2:46:05 PM (2 occurrences)
Last logged: 2:46:05 PM

Error adding entities for domain sensor with platform octoprint
Error while setting up octoprint platform for sensor
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 382, in async_add_entities
    await asyncio.gather(*tasks)
  File "/usr/src/homeassistant/homeassistant/helpers/entity_platform.py", line 607, in _async_add_entity
    await entity.add_to_platform_finish()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 715, in add_to_platform_finish
    self.async_write_ha_state()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 486, in async_write_ha_state
    self._async_write_ha_state()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 519, in _async_write_ha_state
    state = self._stringify_state()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 492, in _stringify_state
    if (state := self.state) is None:
  File "/usr/src/homeassistant/homeassistant/components/sensor/__init__.py", line 273, in state
    value = self.native_value
  File "/usr/src/homeassistant/homeassistant/components/octoprint/sensor.py", line 215, in native_value
    return round(
TypeError: type NoneType doesn't define __round__ method
```
and
```
Logger: homeassistant
Source: components/octoprint/sensor.py:215
First occurred: 2:46:34 PM (36 occurrences)
Last logged: 3:04:04 PM

Error doing job: Task exception was never retrieved
Traceback (most recent call last):
  File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 134, in _handle_refresh_interval
    await self._async_refresh(log_failures=True, scheduled=True)
  File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 265, in _async_refresh
    update_callback()
  File "/usr/src/homeassistant/homeassistant/helpers/update_coordinator.py", line 325, in _handle_coordinator_update
    self.async_write_ha_state()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 486, in async_write_ha_state
    self._async_write_ha_state()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 519, in _async_write_ha_state
    state = self._stringify_state()
  File "/usr/src/homeassistant/homeassistant/helpers/entity.py", line 492, in _stringify_state
    if (state := self.state) is None:
  File "/usr/src/homeassistant/homeassistant/components/sensor/__init__.py", line 273, in state
    value = self.native_value
  File "/usr/src/homeassistant/homeassistant/components/octoprint/sensor.py", line 215, in native_value
    return round(
TypeError: type NoneType doesn't define __round__ method
```
We should either have the option to ignore certain monitored components, or at the very least have the integration handle this scenario gracefully.
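The TypeError comes from calling round() on None when no tool0 temperature exists. A minimal guard of the kind that would avoid it, as an illustrative sketch only (this is not the integration's actual native_value code):

```python
def tool_temperature(raw_temp):
    """Round a reported temperature, or return None when the printer has no such tool."""
    if raw_temp is None:
        return None  # the entity reads as "unknown" instead of crashing setup
    return round(raw_temp, 2)

print(tool_temperature(None), tool_temperature(204.857))  # → None 204.86
```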
### What version of Home Assistant Core has the issue?
core-2021.11.0
### What was the last working version of Home Assistant Core?
2021.10.x
### What type of installation are you running?
Home Assistant Container
### Integration causing the issue
Octoprint
### Link to integration documentation on our website
https://www.home-assistant.io/integrations/octoprint
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
_No response_
### Additional information
_No response_ | null | https://github.com/home-assistant/core/pull/59130 | null | {'base_commit': '185f7beafc05fc355109fd417350591459650366', 'files': [{'path': 'homeassistant/components/octoprint/sensor.py', 'status': 'modified', 'Loc': {"('OctoPrintTemperatureSensor', 'native_value', 206)": {'add': [219], 'mod': [214, 217, 218]}}}, {'path': 'tests/components/octoprint/test_sensor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "(None, 'test_sensors', 8)": {'add': [76], 'mod': [27]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/octoprint/sensor.py"
],
"doc": [],
"test": [
"tests/components/octoprint/test_sensor.py"
],
"config": [],
"asset": []
} | 1 |
home-assistant | core | dbaca51bb3b7b0cea2acd5d3cc6fd1b7a396daf9 | https://github.com/home-assistant/core/issues/45426 | integration: synology_dsm | Synology DSM CPU sensors report usage above 100% | ## The problem
The 5-minute and 15-minute CPU load sensors report values above 100%.

## Environment
Running version 2021.1.4 as a Home Assistant OS VM on the Synology NAS itself.
## Problem-relevant `configuration.yaml`
No configuration file edited, everything done via UI
## Traceback/Error logs
none
## Additional information
| null | https://github.com/home-assistant/core/pull/45500 | null | {'base_commit': 'dbaca51bb3b7b0cea2acd5d3cc6fd1b7a396daf9', 'files': [{'path': 'homeassistant/components/synology_dsm/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [38], 'mod': [97, 104, 111, 118, 125, 126, 132, 133, 139, 140]}}}, {'path': 'homeassistant/components/synology_dsm/sensor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}, "('SynoDSMUtilSensor', 'state', 75)": {'add': [90]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/synology_dsm/const.py",
"homeassistant/components/synology_dsm/sensor.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
home-assistant | core | ed3ebdfea52b222560ee6cae21c84f1e73df4d9a | https://github.com/home-assistant/core/issues/97324 | integration: renault | Error setting up entry Renault for renault | ### The problem
Renault integration fails to start
### What version of Home Assistant Core has the issue?
core-2023.7.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
_No response_
### Anything in the logs that might be useful for us?
```txt
Logger: homeassistant.config_entries
Source: components/renault/renault_hub.py:59
First occurred: 10:45:13 (1 occurrences)
Last logged: 10:45:13
Error setting up entry Renault for renault
Traceback (most recent call last):
File "/usr/src/homeassistant/homeassistant/config_entries.py", line 390, in async_setup
result = await component.async_setup_entry(hass, self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/src/homeassistant/homeassistant/components/renault/__init__.py", line 29, in async_setup_entry
await renault_hub.async_initialise(config_entry)
File "/usr/src/homeassistant/homeassistant/components/renault/renault_hub.py", line 59, in async_initialise
vehicles = await self._account.get_vehicles()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/renault_api/renault_account.py", line 62, in get_vehicles
return await self.session.get_account_vehicles(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/renault_api/renault_session.py", line 188, in get_account_vehicles
return await kamereon.get_account_vehicles(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/renault_api/kamereon/__init__.py", line 239, in get_account_vehicles
await request(
File "/usr/local/lib/python3.11/site-packages/renault_api/kamereon/__init__.py", line 152, in request
http_response.raise_for_status()
File "/usr/local/lib/python3.11/site-packages/aiohttp/client_reqrep.py", line 1005, in raise_for_status
raise ClientResponseError(
aiohttp.client_exceptions.ClientResponseError: 504, message='Gateway Time-out', url=URL('https://api-wired-prod-1-euw1.wrd-aws.com/commerce/v1/accounts/31451f9e-34a5-45ea-83f3-e10f0e5a905e/vehicles?country=FR')
```
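A 504 from the Kamereon endpoint during setup is a transient upstream failure; Home Assistant's convention for such failures is to raise ConfigEntryNotReady so the entry is retried automatically. A self-contained sketch of that pattern (the exception classes below are stand-ins for illustration, not the integration's actual code):

```python
import asyncio

class ConfigEntryNotReady(Exception):
    """Stand-in for homeassistant.exceptions.ConfigEntryNotReady (illustration only)."""

class GatewayTimeout(Exception):
    """Stand-in for an aiohttp ClientResponseError with a 5xx status."""
    def __init__(self, status: int):
        super().__init__(status)
        self.status = status

async def fetch_vehicles(fetch):
    """Re-raise a transient upstream failure as a retryable setup error."""
    try:
        return await fetch()
    except GatewayTimeout as err:
        raise ConfigEntryNotReady(f"Renault API unavailable: {err.status}") from err

async def demo():
    async def failing():
        raise GatewayTimeout(504)
    try:
        await fetch_vehicles(failing)
    except ConfigEntryNotReady as err:
        return str(err)

print(asyncio.run(demo()))  # → Renault API unavailable: 504
```

With this pattern, setup fails softly and is retried later instead of logging "Error setting up entry".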
### Additional information
_No response_ | null | https://github.com/home-assistant/core/pull/97530 | null | {'base_commit': 'ed3ebdfea52b222560ee6cae21c84f1e73df4d9a', 'files': [{'path': 'homeassistant/components/renault/__init__.py', 'status': 'modified', 'Loc': {"(None, 'async_setup_entry', 15)": {'mod': [29]}}}, {'path': 'tests/components/renault/test_init.py', 'status': 'modified', 'Loc': {"(None, 'test_setup_entry_exception', 63)": {'add': [78]}, '(None, None, None)': {'mod': [4]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/renault/__init__.py"
],
"doc": [],
"test": [
"tests/components/renault/test_init.py"
],
"config": [],
"asset": []
} | 1 |
home-assistant | core | 0eae0cca2bf841f2c2cb87fc602bc8afa3557174 | https://github.com/home-assistant/core/issues/35196 | integration: metoffice | Met Office component does not provide future forecast data |
## The problem
The Met Office weather component does not provide a 5-day forecast in Home Assistant the way other weather integrations (e.g. Dark Sky) do, even though the API is capable of returning 5-day forecast data.
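Home Assistant weather entities expose future data through a forecast property that returns a list of per-interval dictionaries. A minimal sketch of that shape (the key names reflect the weather platform's conventions, but this is an illustration, not the integration's code):

```python
from datetime import datetime, timedelta, timezone

def build_forecast(entries):
    """Map (time, condition, temperature) tuples into forecast dictionaries."""
    return [
        {"datetime": when.isoformat(), "condition": cond, "temperature": temp}
        for when, cond, temp in entries
    ]

start = datetime(2020, 5, 4, 12, tzinfo=timezone.utc)
forecast = build_forecast(
    [(start + timedelta(days=i), "sunny", 15 + i) for i in range(5)]
)
print(len(forecast), forecast[0]["temperature"])  # → 5 15
```

A weather card reads entries of this form to render the multi-day outlook.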
## Environment
- Home Assistant Core release with the issue: 0.108.9
- Last working Home Assistant Core release (if known):
- Operating environment (Home Assistant/Supervised/Docker/venv): HassOS VM, Supervisor 220
- Integration causing this issue: Met Office
- Link to integration documentation on our website: https://www.home-assistant.io/integrations/metoffice/
## Problem-relevant `configuration.yaml`
```yaml
weather:
- platform: metoffice
api_key: !secret api_metoffice
latitude: !secret metoffice_lat
longitude: !secret metoffice_lon
```
## Traceback/Error logs
```txt
```
## Additional information
It may be worth noting that the Met Office have launched a new API called DataHub which will eventually replace the current DataPoint API
https://metoffice.apiconnect.ibmcloud.com/metoffice/production/
| null | https://github.com/home-assistant/core/pull/50876 | null | {'base_commit': '0eae0cca2bf841f2c2cb87fc602bc8afa3557174', 'files': [{'path': 'homeassistant/components/metoffice/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 4, 16, 18], 'mod': [14, 15]}, "(None, 'async_setup_entry', 25)": {'add': [50], 'mod': [33, 34, 35, 38, 41, 42, 48, 49, 54, 55, 56]}}}, {'path': 'homeassistant/components/metoffice/config_flow.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3, 6], 'mod': [11]}, "(None, 'validate_input', 16)": {'mod': [25, 26, 27, 30]}}}, {'path': 'homeassistant/components/metoffice/const.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33], 'mod': [28, 29]}}}, {'path': 'homeassistant/components/metoffice/data.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 5, 7, 9]}, "('MetOfficeData', None, 12)": {'mod': [13, 15, 16, 17, 18, 20]}, "('MetOfficeData', '__init__', 20)": {'mod': [22, 23, 24, 26, 27, 28, 30, 31, 32, 33, 35, 36, 37, 39, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 52, 53, 54, 55, 56, 57, 59, 61, 62, 63, 65, 66, 67, 68, 69, 71, 72, 73, 74, 75, 76, 77, 78]}}}, {'path': 'homeassistant/components/metoffice/manifest.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'homeassistant/components/metoffice/sensor.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 22], 'mod': [13, 20, 21]}, "(None, 'async_setup_entry', 80)": {'mod': [88]}, "('MetOfficeCurrentSensor', None, 95)": {'mod': [95, 98, 186, 187, 188, 189, 190, 191, 193, 194, 195, 197, 198, 199, 200, 201, 202, 203, 205, 206, 207, 208]}, "('MetOfficeCurrentSensor', '__init__', 98)": {'mod': [100, 101, 104, 105, 107, 108, 109]}, "('MetOfficeCurrentSensor', 'state', 122)": {'mod': [127, 129, 131, 132, 134, 138, 141, 142]}, "('MetOfficeCurrentSensor', 'extra_state_attributes', 174)": {'mod': [178, 180, 181, 182, 183]}, "('MetOfficeCurrentSensor', 
'entity_registry_enabled_default', 211)": {'mod': [213, 215, 216, 217, 218]}}}, {'path': 'homeassistant/components/metoffice/weather.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 14], 'mod': [2, 4, 12, 13]}, "(None, 'async_setup_entry', 20)": {'mod': [28, 29, 30, 31]}, "('MetOfficeWeather', None, 37)": {'mod': [37, 40, 141, 142, 143, 144, 145, 146, 148, 149, 150, 151, 152, 154, 155, 156, 157, 159, 160, 161, 162]}, "('MetOfficeWeather', '__init__', 40)": {'mod': [42, 43, 45, 46, 48]}, "('MetOfficeWeather', 'condition', 61)": {'mod': [63, 64, 65, 66, 67, 68, 69, 70, 71]}, "('MetOfficeWeather', 'temperature', 74)": {'mod': [76, 77, 78, 79, 80]}, "('MetOfficeWeather', 'visibility', 88)": {'mod': [91, 92]}, "('MetOfficeWeather', 'pressure', 101)": {'mod': [103, 104, 105, 106, 107]}, "('MetOfficeWeather', 'humidity', 110)": {'mod': [112, 113, 114, 115, 116]}, "('MetOfficeWeather', 'wind_speed', 119)": {'mod': [121, 122, 123, 124, 125]}, "('MetOfficeWeather', 'wind_bearing', 128)": {'mod': [130, 131, 132, 133, 134]}}}, {'path': 'requirements_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [476]}}}, {'path': 'requirements_test_all.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [270]}}}, {'path': 'tests/components/metoffice/test_config_flow.py', 'status': 'modified', 'Loc': {"(None, 'test_form_already_configured', 56)": {'add': [70]}}}, {'path': 'tests/components/metoffice/test_sensor.py', 'status': 'modified', 'Loc': {"(None, 'test_one_sensor_site_running', 26)": {'add': [31, 37]}, "(None, 'test_two_sensor_sites_running', 68)": {'add': [74, 75, 80, 83]}}}, {'path': 'tests/components/metoffice/test_weather.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11, 119]}, "(None, 'test_site_cannot_connect', 24)": {'add': [28], 'mod': [38, 40]}, "(None, 'test_site_cannot_update', 49)": {'add': [55, 60, 73], 'mod': [70, 79]}, "(None, 'test_one_weather_site_running', 87)": {'add': [93, 99], 'mod': [109, 
110]}, "(None, 'test_two_weather_sites_running', 125)": {'add': [131, 132, 137, 140, 176], 'mod': [156, 157, 167, 168]}}}, {'path': 'tests/fixtures/metoffice.json', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1497]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"homeassistant/components/metoffice/weather.py",
"homeassistant/components/metoffice/sensor.py",
"tests/fixtures/metoffice.json",
"homeassistant/components/metoffice/const.py",
"homeassistant/components/metoffice/data.py",
"homeassistant/components/metoffice/config_flow.py",
"homeassistant/components/metoffice/__init__.py",
"homeassistant/components/metoffice/manifest.json"
],
"doc": [],
"test": [
"tests/components/metoffice/test_weather.py",
"tests/components/metoffice/test_sensor.py",
"tests/components/metoffice/test_config_flow.py"
],
"config": [
"requirements_test_all.txt",
"requirements_all.txt"
],
"asset": []
} | 1 |
zylon-ai | private-gpt | 5a695e9767e24778ffd725ab195bf72916e27ba5 | https://github.com/zylon-ai/private-gpt/issues/133 | Need help with ingest.py | Running into this error when running `python ingest.py`:
```
Traceback (most recent call last):
  File "C:\Users\krstr\OneDrive\Desktop\privategpt\privateGPT\privateGPT\ingest.py", line 11, in <module>
    from constants import CHROMA_SETTINGS
  File "C:\Users\krstr\OneDrive\Desktop\privategpt\privateGPT\privateGPT\constants.py", line 11, in <module>
    CHROMA_SETTINGS = Settings(
  File "pydantic\env_settings.py", line 39, in pydantic.env_settings.BaseSettings.__init__
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Settings
persist_directory
  none is not an allowed value (type=type_error.none.not_allowed)
```
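For context, this pydantic error means PERSIST_DIRECTORY resolved to None, which typically happens when the .env file is never loaded before constants.py reads the environment (python-dotenv handles that loading). A sketch of the failure mode, using a hypothetical helper rather than privateGPT's actual code:

```python
def read_persist_directory(env) -> str:
    """Fetch the Chroma persist directory, failing loudly when the .env was not loaded."""
    value = env.get("PERSIST_DIRECTORY")
    if value is None:
        raise ValueError("PERSIST_DIRECTORY is not set; is your .env file being loaded?")
    return value

print(read_persist_directory({"PERSIST_DIRECTORY": "db"}))  # → db
```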
I've installed the requirements and changed the .env file and followed the readme up to this point. Seeing some people solve but not answer what fixed the above errors. Help? | null | https://github.com/zylon-ai/private-gpt/pull/168 | null | {'base_commit': '5a695e9767e24778ffd725ab195bf72916e27ba5', 'files': [{'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"requirements.txt"
],
"asset": []
} | 1 | |
zylon-ai | private-gpt | 57a829a8e8cf5c31410c256ae59e0eda9f129a41 | https://github.com/zylon-ai/private-gpt/issues/1258 | Add a list of supported file types to README and Docs | Maybe I'm blind, but I couldn't find a list of the file types supported by privateGPT.
A list of the supported file types could be added to the [README.md](https://github.com/imartinez/privateGPT/blob/main/README.md) and the [PrivateGPT Docs](https://docs.privategpt.dev/).
Somewhat related: https://github.com/imartinez/privateGPT/issues/451. Apologies, I haven't yet had time to look further into a first implementation proposal.
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"fern/docs.yml",
"fern/docs/pages/welcome.mdx",
"fern/docs/pages/quickstart.mdx",
"fern/docs/pages/sdks.mdx",
"fern/docs/pages/ingestion.mdx",
"fern/docs/pages/installation.mdx"
],
"test": [],
"config": [
"Makefile"
],
"asset": []
} | 1 | |
zylon-ai | private-gpt | 60e6bd25eb7e54a6d62ab0a9642c09170c1729e3 | https://github.com/zylon-ai/private-gpt/issues/448 | bug
primordial | ingest.py extracts only the first row from the CSV files | My suggestion for fixing the bug:
1. Modify the load_single_document function as follows:
```python
def load_single_document(file_path: str) -> List[Document]:
    ext = "." + file_path.rsplit(".", 1)[-1]
    if ext in LOADER_MAPPING:
        loader_class, loader_args = LOADER_MAPPING[ext]
        loader = loader_class(file_path, **loader_args)
        return loader.load()
    raise ValueError(f"Unsupported file extension '{ext}'")
```
2. Modify the load_documents function as follows:
```python
def load_documents(source_dir: str, ignored_files: List[str] = []) -> List[Document]:
    """
    Loads all documents from the source documents directory, ignoring specified files
    """
    all_files = []
    for ext in LOADER_MAPPING:
        all_files.extend(
            glob.glob(os.path.join(source_dir, f"**/*{ext}"), recursive=True)
        )
    filtered_files = [file_path for file_path in all_files if file_path not in ignored_files]
    with Pool(processes=os.cpu_count()) as pool:
        results = []
        with tqdm(total=len(filtered_files), desc='Loading new documents', ncols=80) as pbar:
            for i, docs in enumerate(pool.imap_unordered(load_single_document, filtered_files)):
                results.extend(docs)
                pbar.update()
    return results
```
| null | https://github.com/zylon-ai/private-gpt/pull/560 | null | {'base_commit': '60e6bd25eb7e54a6d62ab0a9642c09170c1729e3', 'files': [{'path': 'ingest.py', 'status': 'modified', 'Loc': {"(None, 'load_single_document', 84)": {'mod': [84, 89]}, "(None, 'load_documents', 94)": {'mod': [108, 109]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"ingest.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | 86c2dcfe1b33ac467558487a1df408abee0d2321 | https://github.com/zylon-ai/private-gpt/issues/875 | bug | I got a Traceback error while running privateGPT on Ubuntu 22.04 | While running privateGPT.py, the error started after "gptj_model_load: model size = 3609.38 MB / num tensors = 285". The error reads as follows:
Traceback (most recent call last):
File "/home/dennis/privateGPT/privateGPT.py", line 83, in <module>
main()
File "/home/dennis/privateGPT/privateGPT.py", line 38, in main
llm = GPT4All(model=model_path, n_ctx=model_n_ctx, backend='gptj', n_batch=model_n_batch, callbacks=callbacks, verbose=False)
File "/home/dennis/.local/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for GPT4All
n_ctx
extra fields not permitted (type=value_error.extra)
I have no idea what's happening here. Could anyone fix it so that I can try privateGPT on Ubuntu 22.04 on my old late-2012 iMac?
| null | https://github.com/zylon-ai/private-gpt/pull/881 | null | {'base_commit': '86c2dcfe1b33ac467558487a1df408abee0d2321', 'files': [{'path': 'privateGPT.py', 'status': 'modified', 'Loc': {"(None, 'main', 25)": {'mod': [36, 38]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"privateGPT.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
zylon-ai | private-gpt | fdb45741e521d606b028984dbc2f6ac57755bb88 | https://github.com/zylon-ai/private-gpt/issues/15 | llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this | llama.cpp: loading model from ./models/ggml-model-q4_0.bin
llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this
llama_model_load_internal: format = 'ggml' (old version with low tokenizer quality and no mmap support)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 512
llama_model_load_internal: n_embd = 4096
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 32
llama_model_load_internal: n_layer = 32
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 11008
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 7B
llama_model_load_internal: ggml ctx size = 4113739.11 KB
llama_model_load_internal: mem required = 5809.32 MB (+ 2052.00 MB per state)
...................................................................................................
I am using a recommended model, but I get this error message. How do you think I could solve it? | null | https://github.com/zylon-ai/private-gpt/pull/224 | null | {'base_commit': 'fdb45741e521d606b028984dbc2f6ac57755bb88', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4, 15, 17, 23, 25, 28, 58, 62, 86]}}}, {'path': 'example.env', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [2]}}}, {'path': 'ingest.py', 'status': 'modified', 'Loc': {"(None, 'main', 71)": {'add': [79], 'mod': [75, 76, 81, 84, 87, 90]}, '(None, None, None)': {'mod': [22]}}}, {'path': 'privateGPT.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 11]}, "(None, 'main', 20)": {'mod': [21, 22]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"ingest.py",
"privateGPT.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
"example.env"
],
"asset": []
} | 1 | |
yt-dlp | yt-dlp | c999bac02c5a4f755b2a82488a975e91c988ffd8 | https://github.com/yt-dlp/yt-dlp/issues/9506 | site-bug | [TikTok] Failed to parse JSON/ No video formats found | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
#### EDIT:
yt-dlp's TikTok extractor is failing to parse JSON from the feed API endpoint, even on nightly/master or when passing `--extractor-args "tiktok:api_hostname=api22-normal-c-useast2a.tiktokv.com"`
<details><summary>original log for reference</summary>
```shell
yt-dlp -f "bv*[vcodec^=avc]+ba[ext=m4a]/b[ext=mp4]/b" https://www.tiktok.com/@pouveronica/video/7322479967147740459
[TikTok] Extracting URL: https://www.tiktok.com/@pouveronica/video/7322479967147740459
[TikTok] 7322479967147740459: Downloading video feed
WARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 1 of 4)
[TikTok] 7322479967147740459: Downloading video feed
WARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 2 of 4)
[TikTok] 7322479967147740459: Downloading video feed
WARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 3 of 4)
[TikTok] 7322479967147740459: Downloading video feed
WARNING: [TikTok] 7322479967147740459: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); trying with webpage
[TikTok] 7322479967147740459: Downloading webpage
[info] 7322479967147740459: Downloading 1 format(s): download
ERROR: unable to open for writing: [Errno 2] No such file or directory: 'Replying to @Vy Puthny some key differences in finance and accounting 😃 #hr #humanresources #hrinsight #hrrole #hrtips #hrtrend #hrknowledge #learning #careergrowth #accounting #finance #manpoweroutsourcing #eor @Nica - និកា [7322479967147740459].mp4.part'
```
</details>
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-v', '-U', '-o', '%(title).200B.%(ext)s', 'https://www.tiktok.com/@mix_editor_5/video/7342789941371571462']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version master@2024.03.20.232831 from yt-dlp/yt-dlp-master-builds [07f5b2f75] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 6.1.1-full_build-www.gyan.dev (setts), ffprobe 6.1.1-full_build-www.gyan.dev, phantomjs 2.5.0, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.31.0, sqlite3-3.35.5, urllib3-2.2.1, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1806 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-master-builds/releases/latest
Latest version: master@2024.03.20.232831 from yt-dlp/yt-dlp-master-builds
yt-dlp is up to date (master@2024.03.20.232831 from yt-dlp/yt-dlp-master-builds)
[TikTok] Extracting URL: https://www.tiktok.com/@mix_editor_5/video/7342789941371571462
[TikTok] 7342789941371571462: Downloading video feed
WARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 1 of 4)
[TikTok] 7342789941371571462: Downloading video feed
WARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 2 of 4)
[TikTok] 7342789941371571462: Downloading video feed
WARNING: [TikTok] Expecting value in '': line 1 column 1 (char 0). Retrying... (attempt 3 of 4)
[TikTok] 7342789941371571462: Downloading video feed
WARNING: [TikTok] 7342789941371571462: Failed to parse JSON (caused by JSONDecodeError("Expecting value in '': line 1 column 1 (char 0)")); trying with webpage
[TikTok] 7342789941371571462: Downloading webpage
[debug] [TikTok] Found universal data for rehydration
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] 7342789941371571462: Downloading 1 format(s): download
[debug] Invoking http downloader on "https://v16-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068c004/okMCLjLWAqjFQ5CIXaAfaAiMNgbSzfFCh48fSV/?a=1988&ch=0&cr=3&dr=0&lr=tiktok_m&cd=0%7C0%7C1%7C&cv=1&br=1800&bt=900&bti=ODszNWYuMDE6&cs=0&ds=3&ft=4fUEKMFx8Zmo0H.5Y94jV..7rpWrKsd.&mime_type=video_mp4&qs=0&rc=NGQ5OTY1NTdnaDM0Ojs1ZUBpMzs4N3Q5cnlncTMzNzczM0AzLi9gMi4vNjUxX14uLV4yYSNqZWpoMmQ0NWdgLS1kMTZzcw%3D%3D&btag=e00088000&expire=1711056619&l=20240321153003FA72D5DD8E2EFA514E6F&ply_type=2&policy=2&signature=eb207c9a24f5509f1e4668cbac840d00&tk=tt_chain_token"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: #CapCut #🥺💔 #new #trending #plz #plz #😭😭 #viralvideo #plunfrezzmyaccount🙏🥺 #plzvirulvideo😥 #plzviral🥺🥺🙏🙏foryoupage ⧸⧸ 𝑫𝒆𝒂𝒓 𝑻𝒊𝒌𝒕𝒐𝒌 𝑻.mp4
[download] 100% of 1.62MiB in 00:00:00 at 14.79MiB/s
``` | null | https://github.com/yt-dlp/yt-dlp/pull/9960 | null | {'base_commit': '3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4', 'files': [{'path': 'yt_dlp/extractor/tiktok.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [21]}, "('TikTokBaseIE', None, 33)": {'add': [241]}, "('TikTokBaseIE', '_parse_aweme_video_app', 242)": {'add': [298], 'mod': [246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 315, 316]}, "('TikTokBaseIE', '_parse_aweme_video_web', 412)": {'add': [422, 425, 429, 433, 442, 453], 'mod': [427, 436, 437, 438, 457, 472, 473, 474, 475, 476]}, "('TikTokBaseIE', 'extract_addr', 272)": {'mod': [273, 287]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/tiktok.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4 | https://github.com/yt-dlp/yt-dlp/issues/2350 | site-enhancement | [YouTube] [ChannelTab] extract subscriber count and channel views | ### Checklist
- [X] I'm reporting a site feature request
- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
https://www.youtube.com/channel/UCR1IuLEqb6UEA_zQ81kwXfg
### Description
I have implemented a scraper with BeautifulSoup to extract some additional metadata from YouTube channels for a [project](https://github.com/bbilly1/tubearchivist/blob/7028621bc576936c1b9808336b481a00252ab997/tubearchivist/home/src/index.py#L88) of mine. I was wondering whether there would be interest in integrating that into yt-dlp. Both of these fields are extractable from the page without an API call.
- Channel Subscribers: This information is available in the ytInitialData script in header -> c4TabbedHeaderRenderer -> subscriberCountText
- The number is unfortunately truncated and as a string, e.g. "2.03M subscribers"
- As far as I have observed the unit can be *M* for *millions*, *K* for *thousands* and none for below 1000.
- That is language-specific, but as far as I know yt-dlp already defaults to English?
- Channel Views: This information is in ytInitialData at itemSectionRenderer -> contents -> viewCountText.
- This is as a string and will need some logic to extract the numbers.
Additionally, for extracting banners there is already an issue open: #2237.
This would be a great addition to have upstream directly in yt-dlp.
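A rough sketch of the parsing logic described above, in plain Python (in yt-dlp itself the existing `parse_count`/`str_to_int` helpers would be the natural fit; the suffix multipliers below are assumptions based on the observed strings, not a documented YouTube format):

```python
import re

def parse_subscriber_count(text):
    """Parse truncated counts like '2.03M subscribers' or '997 subscribers'."""
    m = re.match(r'([\d.,]+)\s*([KM]?)', text.strip())
    if not m:
        return None
    number = float(m.group(1).replace(',', ''))
    # Suffix multipliers assumed from the units observed on channel pages.
    multiplier = {'K': 1_000, 'M': 1_000_000}.get(m.group(2), 1)
    return int(round(number * multiplier))

def parse_view_count(text):
    """Pull the digits out of a string like '12,345,678 views'."""
    digits = re.sub(r'\D', '', text)
    return int(digits) if digits else None

print(parse_subscriber_count('2.03M subscribers'))  # 2030000
print(parse_view_count('12,345,678 views'))         # 12345678
```

Since the truncated string only carries three significant digits, the resulting count is inherently approximate; storing it as an integer is still convenient for sorting and filtering.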
### Verbose log
```shell
Does not apply...
```
| null | https://github.com/yt-dlp/yt-dlp/pull/2399 | null | {'base_commit': '135dfa2c7ebc9284db940713c0dc6cbc19ca5fa4', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1140]}}}, {'path': 'yt_dlp/extractor/common.py', 'status': 'modified', 'Loc': {"('InfoExtractor', None, 94)": {'add': [262]}}}, {'path': 'yt_dlp/extractor/youtube.py', 'status': 'modified', 'Loc': {"('YoutubeIE', None, 852)": {'add': [1034, 1077, 1129, 1161, 1188, 1215, 1246, 1284, 1316, 1347, 1515, 1573, 1604, 1667, 1776, 1831, 1864, 1908, 1943, 1969, 2010, 2053]}, "('YoutubeTabIE', None, 4200)": {'add': [4238, 4254, 4270, 4286, 4339, 4355, 4371, 4387, 4403, 4419, 4436, 4617, 4798, 4818], 'mod': [4596, 4607, 4612, 4614]}, "('YoutubeBaseInfoExtractor', '_extract_visitor_data', 511)": {'mod': [517]}, "('YoutubeIE', '_real_extract', 3118)": {'mod': [3490]}, "('YoutubeTabBaseInfoExtractor', '_extract_from_tabs', 3894)": {'mod': [3943]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/youtube.py",
"yt_dlp/extractor/common.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | c8a61a910096c77ce08dad5e1b2fbda5eb964156 | https://github.com/yt-dlp/yt-dlp/issues/9635 | site-bug | Vkplay Unsupported URL | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm asking a question and **not** reporting a bug or requesting a feature
- [X] I've looked through the [README](https://github.com/yt-dlp/yt-dlp#readme)
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar questions **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Please make sure the question is worded well enough to be understood
WARNING: [generic] Falling back on generic information extractor
[generic] records: Extracting information
ERROR: Unsupported URL:
[in#0 @ 00000274e3899b80] Error opening input: Invalid data found when processing input
Error opening input file -.
Error opening input files: Invalid data found when processing input
Is the download not working anymore?
### Provide verbose output that clearly demonstrates the problem
- [ ] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [ ] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
_No response_ | null | https://github.com/yt-dlp/yt-dlp/pull/9636 | null | {'base_commit': 'c8a61a910096c77ce08dad5e1b2fbda5eb964156', 'files': [{'path': 'yt_dlp/extractor/vk.py', 'status': 'modified', 'Loc': {"('VKPlayBaseIE', None, 709)": {'add': [709]}, "('VKPlayIE', None, 767)": {'add': [785], 'mod': [768, 779]}, "('VKPlayLiveIE', None, 804)": {'add': [824], 'mod': [805, 816]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/vk.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 93864403ea7c982be9a78af38835ac0747ed12d1 | https://github.com/yt-dlp/yt-dlp/issues/2043 | bug
external issue | [ceskatelevize.cz] Cannot download manifest - SSLV3_ALERT_HANDSHAKE_FAILURE | I'm sorry, but I think that the extractor is still broken. For instance:
```
$ yt-dlp --verbose "https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/"
[debug] Command-line config: ['--verbose', 'https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8
[debug] yt-dlp version 2021.12.01 [91f071af6]
[debug] Python version 3.9.9 (CPython 64bit) - Linux-5.15.5-gentoo-x86_64-x86_64-AMD_Ryzen_9_3900X_12-Core_Processor-with-glibc2.33
[debug] exe versions: ffmpeg 4.4.1 (setts), ffprobe 4.4.1
[debug] Optional libraries: Crypto, sqlite
[debug] Proxy map: {}
[debug] [CeskaTelevize] Extracting URL: https://www.ceskatelevize.cz/porady/10095426857-interview-ct24/221411058041217/
[CeskaTelevize] 221411058041217: Downloading webpage
[CeskaTelevize] 221411058041217: Downloading webpage
[CeskaTelevize] 221411058041217: Downloading webpage
[CeskaTelevize] 221411058041217: Downloading JSON metadata
[CeskaTelevize] 221411058041217: Downloading JSON metadata
[CeskaTelevize] 221411058041217: Downloading MPD manifest
WARNING: [CeskaTelevize] Failed to download MPD manifest: <urlopen error [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1145)>
[CeskaTelevize] 221411058041217: Downloading JSON metadata
[CeskaTelevize] 221411058041217: Downloading JSON metadata
[CeskaTelevize] 221411058041217: Downloading m3u8 information
WARNING: [CeskaTelevize] Failed to download m3u8 information: <urlopen error [SSL: SSLV3_ALERT_HANDSHAKE_FAILURE] sslv3 alert handshake failure (_ssl.c:1145)>
[download] Downloading playlist: 17. prosinec - Interview ČT24 | Česká televize
[CeskaTelevize] playlist 17. prosinec - Interview ČT24 | Česká televize: Collected 1 videos; downloading 1 of them
[download] Downloading video 1 of 1
ERROR: [CeskaTelevize] 61924494877975106: No video formats found!; please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; see https://github.com/yt-dlp/yt-dlp on how to update. Be sure to call yt-dlp with the --verbose flag and include its complete output.
```
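As an aside for anyone debugging the same `SSLV3_ALERT_HANDSHAKE_FAILURE` in their own scripts: it usually means the server only offers legacy cipher suites that modern OpenSSL rejects at its default security level. A hedged, standalone workaround sketch (a generic technique for such servers, not what the linked extractor fix does):

```python
import ssl
import urllib.request

def legacy_tls_opener():
    # Lower OpenSSL's security level for this one opener so older cipher
    # suites are accepted again; certificate verification stays enabled.
    ctx = ssl.create_default_context()
    ctx.set_ciphers('DEFAULT:@SECLEVEL=1')
    return urllib.request.build_opener(urllib.request.HTTPSHandler(context=ctx))

opener = legacy_tls_opener()
# opener.open(manifest_url) would then be used in place of plain urlopen().
```

`manifest_url` above is a placeholder; the point is only that the relaxed context is scoped to one opener instead of being applied process-wide.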
_Originally posted by @zippy2 in https://github.com/yt-dlp/yt-dlp/issues/1899#issuecomment-997226548_ | null | https://github.com/yt-dlp/yt-dlp/pull/1904 | null | {'base_commit': '93864403ea7c982be9a78af38835ac0747ed12d1', 'files': [{'path': 'yt_dlp/extractor/ceskatelevize.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [15, 16]}, "('CeskaTelevizeIE', '_real_extract', 89)": {'mod': [102, 103, 104, 105, 106]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/ceskatelevize.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 195c22840c594c8f9229cb47ffec2a8984c53a0c | https://github.com/yt-dlp/yt-dlp/issues/2239 | bug | --no-continue is bugged and does nothing (--force-overwrites also) | ### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2021.12.27**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Description
I originally posted this issue months ago in Discord but didn't create an issue for it.
--no-continue doesn't seem to do anything right now.
--force-overwrites, which is meant to include it, doesn't work either.
The only reason I want this to work is to work around https://github.com/yt-dlp/yt-dlp/issues/2001 .
--force-overwrites log: https://pastebin.com/raw/PZ03eWb2
### Verbose log
```shell
[debug] Command-line config: ['-v', 'https://www.funimation.com/v/k-on/disband-the-club', '--config-location', 'funimation.conf', '--exec', 'start /B yt-dlp --config-location funimation.conf -q --fixup force --embed-subs --load-info-json %(__infojson_filename)q', '--write-subs', '--download-archive', 'archive.txt', '--ffmpeg-location', 'D:\\dummy', '--write-info-json', '-P', 'D:\\Temp']
[debug] | Config "funimation.conf": ['--config-location', 'base.conf', '-f', '(bv*+ba/b)[format_note=Uncut] / (bv*+ba/b)', '-n', '--cookies', 'cookies-funimation-com.txt', '--extractor-args', 'funimation:language=english', '--no-continue']
[debug] | | Config "base.conf": ['-o', '%(extractor)s\\%(title)s%(myindex)s.%(ext)s', '-P', 'D:\\Videos', '-P', 'temp:D:\\Temp', '--parse-metadata', 'original_url:#%(playlist_index)s', '--parse-metadata', ' - %(playlist_index)d:^(?P<myindex> - \\d+)$', '--parse-metadata', '%(series)s - S%(season_number)sE%(episode_number)s - %(episode)s:^(?P<title>.+ - S\\d+E\\d+ - \\S+.*)$', '--output-na', '', '--sub-langs', 'enUS,en', '--user-agent', 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:95.0) Gecko/20100101 Firefox/95.0', '--fragment-retries', '500', '-N', '5', '-R', '0', '--no-mtime']
[debug] Encodings: locale cp1252, fs utf-8, out utf-8, err utf-8, pref cp1252
[debug] yt-dlp version 2021.12.27 [6223f67a8] (win_exe)
[debug] Lazy loading extractors is disabled
[debug] Python version 3.10.1 (CPython 64bit) - Windows-10-10.0.19044-SP0
WARNING: ffmpeg-location D:\dummy does not exist! Continuing without ffmpeg.
[debug] exe versions: none
[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets
[debug] Proxy map: {}
[debug] Loading archive file 'archive.txt'
[funimation:page] Logging in
[debug] [funimation:page] Extracting URL: https://www.funimation.com/v/k-on/disband-the-club
[funimation:page] k-on_disband-the-club: Downloading JSON metadata
[debug] [Funimation] Extracting URL: https://www.funimation.com/player/1135013
[Funimation] 1135013: Downloading player webpage for 1135013
[Funimation] disband-the-club: Downloading Uncut english (1135013) JSON
[Funimation] disband-the-club: Downloading Uncut english (1135013) m3u8 information
[debug] Sort order given by extractor: lang, source
[debug] Formats sorted by: hasvid, ie_pref, lang, source, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, id
[debug] Searching for '\\#(?P<playlist_index>.+)' in '%(original_url)s'
WARNING: Could not interpret 'original_url' as '#%(playlist_index)s'
[debug] Searching for '^(?P<myindex> - \\d+)$' in ' - %(playlist_index)d'
WARNING: Could not interpret ' - %(playlist_index)d' as '^(?P<myindex> - \\d+)$'
[debug] Searching for '^(?P<title>.+ - S\\d+E\\d+ - \\S+.*)$' in '%(series)s - S%(season_number)sE%(episode_number)s - %(episode)s'
[MetadataParser] Parsed title from '%(series)s - S%(season_number)sE%(episode_number)s - %(episode)s': 'K-On! - S1E1 - Disband the Club!'
[info] 1133115: Downloading 1 format(s): 1135013-hls-6819+1135013-hls-audio-aacl-256-English
[info] Writing video metadata as JSON to: D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.info.json
WARNING: ffmpeg-location D:\dummy does not exist! Continuing without ffmpeg.
WARNING: You have requested merging of multiple formats but ffmpeg is not installed. The formats won't be merged.
[debug] Invoking downloader on "https://vmfst-api.prd.funimationsvc.com/FunimationStoreFront/V1757083/26d6f23c-a90f-45c6-80e0-e2c01864b291strnv-hl154_streaming_video_1920_1080_7800000_index.m3u8?Key-Pair-Id=APKAIHNXECY27H4O6NIA&Policy=eyJTdGF0ZW1lbnQiOlt7IlJlc291cmNlIjoiaHR0cHM6Ly9kMzNldDc3ZXZkOWJnZy5jbG91ZGZyb250Lm5ldC9GdW5pbWF0aW9uU3RvcmVGcm9udC9WMTc1NzA4My8qIiwiQ29uZGl0aW9uIjp7IkRhdGVMZXNzVGhhbiI6eyJBV1M6RXBvY2hUaW1lIjoxNjQxNDY1MzI2fX19XX0_&Signature=i5t~O2TJZ~8XNKwFb~3huANBs5rUvs2nq2OqNsOHecNz4NkJKDJdj2sGC0zCLFu9~Kmu05wsgY-5xNChkwJ3BEM42lqiNdf~F1CJm4vJikyAVXSq--SHUHNjKXq5BWaGVMwWDd~1YHtBWlyoplYO9HnInG6~mIMMhAMGcTBkOBv1el9r2JcpI4V5CMPvCOA2TaDwKr9HeVTmHnVOOfApAfKfRR60CRsVVXgFBNdT6NGP6myy9ITdZzYinqcnggNiO2mza6jtotnokX0tOnrefthhkLikAcpzUnDZg0YC4Uj2AfTAxK~A6yGPvTp2~iR6yGayibhqFIq~-XiZK48KMw__&rt=1450032"
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 727
[download] Destination: D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4
WARNING: The download speed shown is only of one thread. This is a known issue and patches are welcome
[download] D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag17 has already been downloaded
[download] D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag18 has already been downloaded
[download] 2.2% of ~1.12GiB at 21.40MiB/s ETA Unknown (frag 16/727)[download] D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag19 has already been downloaded
[download] 2.3% of ~1.12GiB at 781.98MiB/s ETA Unknown (frag 17/727)[download] D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag20 has already been downloaded
[download] D:\Temp\Funimation\K-On! - S1E1 - Disband the Club!.f1135013-hls-6819.mp4.part-Frag21 has already been downloaded
[download] 0.3% of ~39.60GiB at 11.05MiB/s ETA 56:42 (frag 20/727)
```
| null | https://github.com/yt-dlp/yt-dlp/pull/2901 | null | {'base_commit': '195c22840c594c8f9229cb47ffec2a8984c53a0c', 'files': [{'path': 'yt_dlp/downloader/fragment.py', 'status': 'modified', 'Loc': {"('FragmentFD', '_prepare_frag_download', 165)": {'mod': [181]}}}, {'path': 'yt_dlp/downloader/http.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18], 'mod': [8]}, "('HttpFD', 'real_download', 28)": {'add': [61]}, "('HttpFD', 'establish_connection', 89)": {'add': [93], 'mod': [102, 127, 128, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, 142, 143]}}}, {'path': 'yt_dlp/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5254]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/utils.py",
"yt_dlp/downloader/http.py",
"yt_dlp/downloader/fragment.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 1f6b90ed8db7006e2f2d539c41c8f3e59058dd00 | https://github.com/yt-dlp/yt-dlp/issues/4587 | good first issue
site-enhancement | 9gag.com - NineGagIE - InfoExtractor - add Uploader info to the returned metadata | ### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I'm running yt-dlp version **2022.07.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
https://9gag.com/gag/a119eY2 (Anonymous Uploader)
https://9gag.com/gag/ajgp66G (Non-Anonymous Uploader)
### Provide a description that is worded well enough to be understood
9gag recently added the uploader of a post on the website.
I would like to know if you could add uploader information to the returned metadata when the uploader isn't anonymous.
That could be done with the already extracted JSON stored in the variable `post`.
Then it's a matter of getting `creator = post.get('creator')`; if it is not `null`, we can get:
`uploader = creator['fullName']`,
`uploader_id = creator['username']`,
`uploader_url = url_or_none(creator['profileUrl'])`
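A minimal sketch of that suggestion against a fabricated `post` dict (the field names come from the issue; the sample values below are invented, and yt-dlp's `url_or_none` sanitizer is omitted to keep the snippet self-contained):

```python
def extract_uploader_fields(post):
    # `creator` is null for anonymous uploaders, per the issue description.
    creator = post.get('creator')
    if not creator:
        return {}
    return {
        'uploader': creator.get('fullName'),
        'uploader_id': creator.get('username'),
        'uploader_url': creator.get('profileUrl'),
    }

# Fabricated stand-ins for the JSON the extractor already parses as `post`.
anonymous_post = {'creator': None}
named_post = {'creator': {
    'fullName': 'Example User',
    'username': 'exampleuser',
    'profileUrl': 'https://9gag.com/u/exampleuser',
}}

print(extract_uploader_fields(anonymous_post))  # {}
print(extract_uploader_fields(named_post))
```

Returning an empty dict for anonymous posts lets the caller merge the result into the info dict unconditionally without emitting `None`-valued keys it would otherwise have to strip.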
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
No verbose log; this is just a feature request to add additional metadata to an already existing InfoExtractor.
```
| null | https://github.com/yt-dlp/yt-dlp/pull/4597 | null | {'base_commit': '1f6b90ed8db7006e2f2d539c41c8f3e59058dd00', 'files': [{'path': 'yt_dlp/extractor/ninegag.py', 'status': 'modified', 'Loc': {"('NineGagIE', None, 12)": {'add': [13, 23, 34], 'mod': [20, 25]}, "('NineGagIE', '_real_extract', 37)": {'add': [119], 'mod': [49, 101, 113, 117, 122, 123, 124]}, '(None, None, None)': {'mod': [6]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/ninegag.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2b18a8c59018a863cfac5b959ee14e474a7a87bc | https://github.com/yt-dlp/yt-dlp/issues/417 | bug | [Broken] [YouTube] Can't get full Chat Replay when using cookies |
## Checklist
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running yt-dlp version **2021.06.09**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKcj']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.06.09
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
PASTE VERBOSE LOG HERE
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
When I download the chat replay with my cookies, it doesn't start from the beginning; instead it starts a few minutes into the video, usually very close to where I am in the YouTube player (within a few seconds to a minute).
When I don't use cookies, it gets the chat replay from the start without any problem | null | https://github.com/yt-dlp/yt-dlp/pull/437 | null | {'base_commit': '2b18a8c59018a863cfac5b959ee14e474a7a87bc', 'files': [{'path': 'yt_dlp/downloader/youtube_live_chat.py', 'status': 'modified', 'Loc': {"('YoutubeLiveChatFD', 'real_download', 22)": {'add': [61, 144, 146], 'mod': [93, 94, 95, 96, 98, 157, 158, 159, 160, 161]}, "('YoutubeLiveChatFD', 'download_and_parse_fragment', 98)": {'mod': [105, 109]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/downloader/youtube_live_chat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | f6c73aad5f1a67544bea137ebd9d1e22e0e56567 | https://github.com/yt-dlp/yt-dlp/issues/9512 | site-bug | [Globo] Unable to download JSON metadata: HTTP Error 404: Not Found | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Brazil
### Provide a description that is worded well enough to be understood
```shell
yt-dlp --cookies-from-browser chrome -F https://globoplay.globo.com/v/12450434
```
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', '--cookies-from-browser', 'chrome', '-F', 'https://globoplay.globo.com/v/12450434']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)
[debug] Python 3.12.2 (CPython x86_64 64bit) - macOS-14.2.1-x86_64-i386-64bit (OpenSSL 3.2.1 30 Jan 2024)
[debug] exe versions: ffmpeg 6.1.1 (setts), ffprobe 6.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.1.0, certifi-2024.02.02, mutagen-1.47.0, requests-2.31.0, sqlite3-3.45.2, urllib3-2.2.1, websockets-12.0
[debug] Proxy map: {}
Extracting cookies from chrome
[debug] Extracting cookies from: "/Users/USER/Library/Application Support/Google/Chrome/Default/Cookies"
[debug] using find-generic-password to obtain password from OSX keychain
Extracted 3210 cookies from chrome
[debug] cookie version breakdown: {'v10': 3254, 'other': 0, 'unencrypted': 51}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1803 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.03.10 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)
[Globo] Extracting URL: https://globoplay.globo.com/v/12450434
[Globo] 12450434: Getting cookies
[Globo] 12450434: Downloading JSON metadata
[Globo] 12450434: Downloading security hash for 12450434
ERROR: [Globo] 12450434: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: Not Found>)
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 732, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/globo.py", line 99, in _real_extract
security = self._download_json(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 1086, in download_content
res = getattr(self, download_handle.__name__)(url_or_request, video_id, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 1050, in download_handle
res = self._download_webpage_handle(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 920, in _download_webpage_handle
urlh = self._request_webpage(url_or_request, video_id, note, errnote, fatal, data=data, headers=headers, query=query, expected_status=expected_status)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 877, in _request_webpage
raise ExtractorError(errmsg, cause=err)
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/extractor/common.py", line 864, in _request_webpage
return self._downloader.urlopen(self._create_request(url_or_request, data, headers, query))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/YoutubeDL.py", line 4101, in urlopen
return self._request_director.send(req)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 115, in send
response = handler.send(request)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/_helper.py", line 204, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/common.py", line 326, in send
return self._send(request)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/Cellar/yt-dlp/2024.03.10/libexec/lib/python3.12/site-packages/yt_dlp/networking/_requests.py", line 351, in _send
raise HTTPError(res, redirect_loop=max_redirects_exceeded)
yt_dlp.networking.exceptions.HTTPError: HTTP Error 404: Not Found
```
| null | https://github.com/yt-dlp/yt-dlp/pull/11795 | null | {'base_commit': 'f6c73aad5f1a67544bea137ebd9d1e22e0e56567', 'files': [{'path': 'yt_dlp/extractor/globo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 11, 14, 15], 'mod': [1, 2, 4, 8, 10]}, "('GloboIE', None, 18)": {'add': [20], 'mod': [19, 22, 28, 29, 41, 42, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 65, 66, 68, 70, 71, 72, 73]}, "('GloboIE', '_real_extract', 80)": {'mod': [83, 84, 85, 87, 88, 89, 90, 91, 93, 96, 97, 98, 99, 103, 104, 105, 107, 109, 110, 111, 112, 113, 114, 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 128, 129, 130, 131, 132, 133, 134, 136, 137, 138, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156, 158, 159, 160, 164, 165, 166, 167]}, "('GloboArticleIE', None, 173)": {'mod': [174]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/globo.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 8c53322cda75394a8d551dde20b2529ee5ad6e89 | https://github.com/yt-dlp/yt-dlp/issues/5744 | site-enhancement
patch-available | [ok.ru] Download subtitle | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm requesting a site-specific feature
- [X] I've verified that I'm running yt-dlp version **2022.11.11** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Example URLs
https://ok.ru/video/4249587550747
### Provide a description that is worded well enough to be understood
Download subtitle from ok.ru
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://ok.ru/video/4249587550747', '--no-download', '--list-subs', '-vU']
[debug] User config "/home/nir/.config/yt-dlp/config": ['--no-overwrites', '--restrict-filenames', '--merge-output-format', 'mkv', '--paths', '~/Downloads/youtube_dl', '--output', '%(title)s_%(id)s_%(autonumber)d.%(ext)s']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.11.11 [8b644025b] (source)
[debug] Lazy loading extractors is disabled
[debug] Plugins: ['SamplePluginIE', 'SamplePluginPP']
[debug] Git HEAD: 935bac1e
[debug] Python 3.8.10 (CPython x86_64 64bit) - Linux-5.15.0-56-generic-x86_64-with-glibc2.29 (OpenSSL 1.1.1f 31 Mar 2020, glibc 2.31)
[debug] exe versions: ffmpeg 4.2.7, ffprobe 4.2.7
[debug] Optional libraries: certifi-2019.11.28, secretstorage-2.3.1, sqlite3-2.6.0
[debug] Proxy map: {}
[debug] Loaded 1731 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.11.11, Current version: 2022.11.11
yt-dlp is up to date (2022.11.11)
[Odnoklassniki] Extracting URL: https://ok.ru/video/4249587550747
[Odnoklassniki] 4249587550747: Downloading desktop webpage
[Odnoklassniki] 4249587550747: Downloading m3u8 information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
4249587550747 has no subtitles
```
| null | https://github.com/yt-dlp/yt-dlp/pull/5920 | null | {'base_commit': '8c53322cda75394a8d551dde20b2529ee5ad6e89', 'files': [{'path': 'yt_dlp/extractor/odnoklassniki.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [13]}, "('OdnoklassnikiIE', None, 21)": {'add': [155, 204]}, "('OdnoklassnikiIE', '_extract_desktop', 222)": {'add': [296, 307]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/odnoklassniki.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 5f2da312fa66d6f001ca4d8d79ee281b9b62e9ed | https://github.com/yt-dlp/yt-dlp/issues/840 | enhancement | UnicodeDecodeError when configuration saved as UTF-8 and OS default encoding is GBK | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First off, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.08.10. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with an outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Read bugs section in FAQ: https://github.com/yt-dlp/yt-dlp
- Finally, put x into all relevant boxes like this [x] (Don't forget to delete the empty space)
-->
- [x] I'm reporting a bug unrelated to a specific site
- [x] I've verified that I'm running yt-dlp version **2021.08.10**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] The provided URLs do not contain any DRM to the best of my knowledge
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.08.10
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
Traceback (most recent call last):
File "yt_dlp\__main__.py", line 19, in <module>
File "yt_dlp\__init__.py", line 750, in main
File "yt_dlp\__init__.py", line 73, in _real_main
File "yt_dlp\options.py", line 1496, in parseOpts
File "yt_dlp\options.py", line 1476, in get_configs
File "yt_dlp\options.py", line 1471, in read_options
File "yt_dlp\options.py", line 60, in _readOptions
UnicodeDecodeError: 'gbk' codec can't decode byte 0xa8 in position 16: illegal multibyte sequence
[40668] Failed to execute script '__main__' due to unhandled exception!
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Please make sure the description is worded well enough to be understood, see https://github.com/ytdl-org/youtube-dl#is-the-description-of-the-issue-itself-sufficient. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
I'm a Chinese user and I tried to write comments in my native language in the configuration file `yt-dlp.conf`, placing the file alongside the executable:
```bash
# 代理服务器 (which means Proxy Server)
--proxy 127.0.0.1:29970
```
Then I saved the file as UTF-8. When running yt-dlp, with or without arguments, it reports a `UnicodeDecodeError`.
Because I'm using Chinese as my display language, the default encoding of my system is GBK. It seems that yt-dlp tries to decode the configuration file as GBK, regardless of its actual encoding.
Changing the code page to 65001 (UTF-8) with `chcp 65001` in `cmd` doesn't help. Removing the CJK characters or re-saving the file as GBK works around the problem, but I think saving the file as UTF-8 should be supported.
| null | https://github.com/yt-dlp/yt-dlp/pull/4357 | null | {'base_commit': '5f2da312fa66d6f001ca4d8d79ee281b9b62e9ed', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1163]}}}, {'path': 'test/test_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [41, 1824]}}}, {'path': 'yt_dlp/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3487, 5396]}, "('Config', 'read_file', 5446)": {'add': [5450], 'mod': [5448, 5453]}, "(None, 'is_html', 3488)": {'mod': [3491, 3492, 3493, 3494, 3495, 3496, 3497]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/utils.py"
],
"doc": [
"README.md"
],
"test": [
"test/test_utils.py"
],
"config": [],
"asset": []
} | 1 |
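The encoding problem in the record above (a UTF-8 config file decoded with the OS default GBK) is typically handled by checking for a BOM and preferring UTF-8 before falling back to the locale encoding. A minimal sketch of that idea — the helper name and the exact fallback order are assumptions for illustration, not yt-dlp's actual implementation:

```python
import codecs
import locale

def read_config_text(path):
    """Read a config file, preferring UTF-8 over the locale encoding.

    Hypothetical helper: an explicit BOM wins; otherwise try UTF-8 first,
    and only then the OS default (e.g. GBK on Chinese Windows).
    """
    with open(path, 'rb') as f:
        raw = f.read()
    if raw.startswith(codecs.BOM_UTF8):
        return raw.decode('utf-8-sig')   # strips the BOM
    if raw.startswith((codecs.BOM_UTF16_LE, codecs.BOM_UTF16_BE)):
        return raw.decode('utf-16')      # 'utf-16' consumes the BOM itself
    try:
        # Config files are commonly saved as UTF-8, so try that first.
        return raw.decode('utf-8')
    except UnicodeDecodeError:
        # Last resort: the OS default encoding, as before the fix.
        return raw.decode(locale.getpreferredencoding(False))
```

With this order, a file containing `# 代理服务器` saved as UTF-8 decodes correctly even when the locale default is GBK, while legacy GBK-encoded files still work through the fallback.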
yt-dlp | yt-dlp | 2fd226f6a76715e429709d7172183d48e07c7ab3 | https://github.com/yt-dlp/yt-dlp/issues/544 | bug | Program not running without `_sqlite3` module | ## Checklist
- [ ] I'm reporting a broken site support issue
- [x] I've verified that I'm running yt-dlp version **2021.07.21**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Verbose log
```
$ yt-dlp --verbose --version
Traceback (most recent call last):
File "/home/me/.local/lib/python3.9/runpy.py", line 188, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/home/me/.local/lib/python3.9/runpy.py", line 147, in _get_module_details
return _get_module_details(pkg_main_name, error)
File "/home/me/.local/lib/python3.9/runpy.py", line 111, in _get_module_details
__import__(pkg_name)
File "/home/me/.local/lib/python3.9/site-packages/yt_dlp/__init__.py", line 16, in <module>
from .options import (
File "/home/me/.local/lib/python3.9/site-packages/yt_dlp/options.py", line 22, in <module>
from .cookies import SUPPORTED_BROWSERS
File "/home/me/.local/lib/python3.9/site-packages/yt_dlp/cookies.py", line 5, in <module>
import sqlite3
File "/home/me/.local/lib/python3.9/sqlite3/__init__.py", line 23, in <module>
from sqlite3.dbapi2 import *
File "/home/me/.local/lib/python3.9/sqlite3/dbapi2.py", line 27, in <module>
from _sqlite3 import *
ModuleNotFoundError: No module named '_sqlite3'
```
## Description
The `_sqlite3` Python module seems to be required since version `2021.07.21`.
Can we make the program work without that module? It is of course OK that some functions are disabled in that situation. | null | https://github.com/yt-dlp/yt-dlp/pull/554 | null | {'base_commit': '2fd226f6a76715e429709d7172183d48e07c7ab3', 'files': [{'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25], 'mod': [5]}, "(None, '_extract_firefox_cookies', 91)": {'add': [92]}, "(None, '_extract_chrome_cookies', 196)": {'add': [197]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/cookies.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
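The record above asks for yt-dlp to keep working on Python builds that lack the `_sqlite3` extension. The usual pattern for that is a guarded import at module load plus a runtime check at the point of use — a minimal sketch under assumed names (the function and message wording are illustrative, not yt-dlp's actual code):

```python
try:
    import sqlite3
    SQLITE_AVAILABLE = True
except ImportError:
    # Some Python builds (e.g. compiled without SQLite headers) lack _sqlite3,
    # which makes `import sqlite3` fail at interpreter level.
    sqlite3 = None
    SQLITE_AVAILABLE = False

def extract_browser_cookies(database_path):
    """Hypothetical cookie-extraction entry point that degrades gracefully.

    Importing this module never crashes; only calling the sqlite-dependent
    feature raises a clear error when support is missing.
    """
    if not SQLITE_AVAILABLE:
        raise RuntimeError(
            'Python was compiled without sqlite3 support; '
            'extracting cookies from browsers is unavailable')
    with sqlite3.connect(database_path) as conn:
        return conn.execute(
            'SELECT host_key, name, value FROM cookies').fetchall()
```

This way, the CLI still starts and all non-cookie features work; only the options that genuinely need SQLite report the missing module.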
yt-dlp | yt-dlp | f14c2333481c63c24017a41ded7d8f36726504b7 | https://github.com/yt-dlp/yt-dlp/issues/3005 | site-bug | Can't extract from sportdeutschland.tv | ### Checklist
- [X] I'm reporting a site feature request
- [X] I've verified that I'm running yt-dlp version **2022.03.08.1**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Example URLs
https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0
### Description
Can't extract from this link:
https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0
### Verbose log
```shell
[debug] Command-line config: ['-vU', 'https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0', '--verbose']
[debug] Encodings: locale cp1252, fs utf-8, out utf-8 (No ANSI), err utf-8 (No ANSI), pref cp1252
[debug] yt-dlp version 2022.03.08.1 [c0c2c57] (win_exe)
[debug] Python version 3.8.10 (CPython 64bit) - Windows-7-6.1.7601-SP1
[debug] exe versions: ffmpeg 2022-01-10-git-f37e66b393-full_build-www.gyan.dev (setts), ffprobe 2022-01-10-git-f37e66b393-full_build-www.gyan.dev
[debug] Optional libraries: brotli, Cryptodome, mutagen, sqlite, websockets
[debug] Proxy map: {}
Latest version: 2022.03.08.1, Current version: 2022.03.08.1
yt-dlp is up to date (2022.03.08.1)
[debug] [SportDeutschland] Extracting URL: https://sportdeutschland.tv/deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0
[SportDeutschland] deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0: Downloading JSON metadata
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 735, in _request_webpage
File "yt_dlp\YoutubeDL.py", line 3591, in urlopen
File "urllib\request.py", line 531, in open
File "urllib\request.py", line 640, in http_response
File "urllib\request.py", line 569, in error
File "urllib\request.py", line 502, in _call_chain
File "urllib\request.py", line 649, in http_error_default
urllib.error.HTTPError: HTTP Error 404: Not Found
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\extractor\common.py", line 617, in extract
File "yt_dlp\extractor\sportdeutschland.py", line 47, in _real_extract
File "yt_dlp\extractor\common.py", line 997, in _download_json
File "yt_dlp\extractor\common.py", line 976, in _download_json_handle
File "yt_dlp\extractor\common.py", line 768, in _download_webpage_handle
File "yt_dlp\extractor\common.py", line 753, in _request_webpage
yt_dlp.utils.ExtractorError: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the "Broken site" issue template
properly. Confirm you are on the latest version using yt-dlp -U
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1389, in wrapper
File "yt_dlp\YoutubeDL.py", line 1459, in __extract_info
File "yt_dlp\extractor\common.py", line 643, in extract
yt_dlp.utils.ExtractorError: [SportDeutschland] deutscherbadmintonverband/bwf-tour-1-runde-feld-1-yonex-gainward-german-open-2022-0: Unable to download JSON metadata: HTTP Error 404: Not Found (caused by <HTTPError 404: 'Not Found'>); please report this issue on https://github.com/yt-dlp/yt-dlp , filling out the "Broken site" issue template properly. Confirm you are on the latest version using yt-dlp -U
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "yt_dlp\__main__.py", line 19, in <module>
File "yt_dlp\__init__.py", line 864, in main
File "yt_dlp\__init__.py", line 854, in _real_main
File "yt_dlp\YoutubeDL.py", line 3254, in download
File "yt_dlp\YoutubeDL.py", line 3227, in wrapper
File "yt_dlp\YoutubeDL.py", line 1380, in extract_info
File "yt_dlp\YoutubeDL.py", line 1407, in wrapper
File "yt_dlp\utils.py", line 1088, in format_traceback
TypeError: format_exception() missing 2 required positional arguments: 'value' and 'tb'
[31380] Failed to execute script '__main__' due to unhandled exception!
```
| null | https://github.com/yt-dlp/yt-dlp/pull/6041 | null | {'base_commit': 'f14c2333481c63c24017a41ded7d8f36726504b7', 'files': [{'path': 'yt_dlp/extractor/sportdeutschland.py', 'status': 'modified', 'Loc': {"('SportDeutschlandIE', '_real_extract', 42)": {'add': [95], 'mod': [44, 45, 47, 48, 49, 52, 53, 54, 56, 58, 59, 60, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94]}, '(None, None, None)': {'mod': [3, 4, 5, 6, 7, 8, 9]}, "('SportDeutschlandIE', None, 13)": {'mod': [16, 18, 20, 21, 22, 23, 24, 25, 26, 27, 29, 31, 32, 33, 34, 35, 36, 37, 38, 39]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/sportdeutschland.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4 | https://github.com/yt-dlp/yt-dlp/issues/9652 | DRM
site-bug
patch-available | on.orf.at: incomplete DRM detection | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
### Checklist
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Austria
### Provide a description that is worded well enough to be understood
I found a video that is DRM protected, but the `-F` parameter still reports available formats to download.
I added `--allow-unplayable-formats` to make it easier to see which formats are marked as DRM and which are not.
All of them should be marked, but some aren't.
_See Complete Verbose Output_
All of them are DRM protected, which can be verified with `--check-formats`:
```
/tmp 3.9s [1] nix run -- nixpkgs#yt-dlp "https://on.orf.at/video/14217002/dsf" --check-formats -vU
[debug] Command-line config: ['https://on.orf.at/video/14217002/dsf', '--check-formats', '-vU']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)
[debug] Python 3.11.8 (CPython x86_64 64bit) - Linux-6.1.77-x86_64-with-glibc2.38 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.38)
[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.18.0, brotlicffi-1.1.0.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.43.2, urllib3-2.0.7, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1803 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.03.10 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)
[orf:on] Extracting URL: https://on.orf.at/video/14217002/dsf
[orf:on] dsf: Downloading webpage
[orf:on] dsf: Downloading JSON metadata
[orf:on] dsf: Downloading m3u8 information
[orf:on] dsf: Downloading m3u8 information
[orf:on] dsf: Downloading MPD manifest
[orf:on] dsf: Downloading MPD manifest
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] Testing format hls-3192-1
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-3192-1. Skipping...
[info] Testing format hls-3192-0
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-3192-0. Skipping...
[info] Testing format hls-1992-1
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-1992-1. Skipping...
[info] Testing format hls-1992-0
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-1992-0. Skipping...
[info] Testing format hls-992-1
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-992-1. Skipping...
[info] Testing format hls-992-0
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-992-0. Skipping...
[info] Testing format dash-p0aa0br192000-1
[dashsegments] Total fragments: 1
[download] Destination: /tmp/tmp973tt42o.tmp
[download] 100% of 651.00B in 00:00:00 at 9.21KiB/s
[info] Testing format hls-3192-1
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-3192-1. Skipping...
[info] Testing format hls-3192-0
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-3192-0. Skipping...
[info] Testing format hls-1992-1
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-1992-1. Skipping...
[info] Testing format hls-1992-0
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-1992-0. Skipping...
[info] Testing format hls-992-1
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-992-1. Skipping...
[info] Testing format hls-992-0
[hlsnative] Downloading m3u8 manifest
ERROR: This format is DRM protected; Try selecting another format with --format or add --check-formats to automatically fallback to the next best format
[info] Unable to download format hls-992-0. Skipping...
ERROR: [orf:on] 14217002: Requested format is not available. Use --list-formats for a list of available formats
Traceback (most recent call last):
File "/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1594, in wrapper
return func(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1750, in __extract_info
return self.process_ie_result(ie_result, download, extra_info)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 1809, in process_ie_result
ie_result = self.process_video_result(ie_result, download=download)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nix/store/rmfvh66k2rr05djcqxx61v59wr569xmb-python3.11-yt-dlp-2024.3.10/lib/python3.11/site-packages/yt_dlp/YoutubeDL.py", line 2930, in process_video_result
raise ExtractorError(
yt_dlp.utils.ExtractorError: [orf:on] 14217002: Requested format is not available. Use --list-formats for a list of available formats
```
I created an OpenAPI description for the new `v4.3` API, which can be found here:
https://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2
The interesting part is the public API:
https://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2#file-openapiv3-yaml-L626
with the content of an `Episode`
https://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2#file-openapiv3-yaml-L813
there is a field called `is_drm_protected`, which should be reliable.
Edit:
There is also the same field for each `Source`
https://gist.github.com/TuxCoder/6987f49e01d8ef826037cb99afdcc1b2#file-openapiv3-yaml-L674
The JSON in question can be fetched here:
https://api-tvthek.orf.at/api/v4.3/public/episode/encrypted/M2RTbGZlazAzbnNMS2RqNEpzZDE0MjE3MDAy
EndEdit
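As a quick local check of that flag (a sketch: the payload below is a made-up miniature of the v4.3 episode JSON described in the gist, and the real response has many more fields), both the episode-level and per-`Source` fields can be read like this:

```python
import json

# Hypothetical miniature of the v4.3 episode payload from the gist above;
# the field names follow the OpenAPI description, everything else is invented.
sample = '''
{
  "is_drm_protected": true,
  "sources": {
    "hls":  [{"src": "https://example.invalid/master.m3u8", "is_drm_protected": true}],
    "dash": [{"src": "https://example.invalid/manifest.mpd", "is_drm_protected": false}]
  }
}
'''

api_json = json.loads(sample)
print(api_json['is_drm_protected'])  # episode-level flag
for manifest_type, sources in api_json['sources'].items():
    for source in sources:
        # per-Source flag, as mentioned in the edit above
        print(manifest_type, source.get('is_drm_protected'))
```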
I tested this with a small patch:
```patch
diff --git a/yt_dlp/extractor/orf.py b/yt_dlp/extractor/orf.py
index 526e9acaf..4ff4cf90c 100644
--- a/yt_dlp/extractor/orf.py
+++ b/yt_dlp/extractor/orf.py
@@ -590,6 +590,9 @@ def _extract_video(self, video_id, display_id):
api_json = self._download_json(
f'https://api-tvthek.orf.at/api/v4.3/public/episode/encrypted/{encrypted_id}', display_id)
+
+ has_drm = traverse_obj(api_json, ('is_drm_protected', {bool}))
+
formats, subtitles = [], {}
for manifest_type in traverse_obj(api_json, ('sources', {dict.keys}, ...)):
for manifest_url in traverse_obj(api_json, ('sources', manifest_type, ..., 'src', {url_or_none})):
@@ -601,6 +604,8 @@ def _extract_video(self, video_id, display_id):
manifest_url, display_id, fatal=False, mpd_id='dash')
else:
continue
+ for fmt in fmts:
+ fmt['has_drm'] = has_drm
formats.extend(fmts)
self._merge_subtitles(subs, target=subtitles)
```
which seems to fix the problem:
Now all formats are shown as DRM protected
```
[~/projects/yt-dlp]$ python3 yt_dlp/__main__.py "https://on.orf.at/video/14217002/dsf" -F --allow-unplayable-formats -vU
[debug] Command-line config: ['https://on.orf.at/video/14217002/dsf', '-F', '--allow-unplayable-formats', '-vU']
WARNING: You have asked for UNPLAYABLE formats to be listed/downloaded. This is a developer option intended for debugging.
If you experience any issues while using this option, DO NOT open a bug report
[debug] Encodings: locale utf-8, fs utf-8, pref utf-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (source)
[debug] Lazy loading extractors is disabled
[debug] Git HEAD: 79a451e57
[debug] Python 3.11.8 (CPython x86_64 64bit) - Linux-6.1.77-x86_64-with-glibc2.38 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.38)
[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0
[debug] Optional libraries: Cryptodome-3.18.0, brotlicffi-1.1.0.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.43.2, urllib3-2.0.7, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1810 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.03.10 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)
[orf:on] Extracting URL: https://on.orf.at/video/14217002/dsf
[orf:on] dsf: Downloading webpage
[orf:on] dsf: Downloading JSON metadata
[orf:on] dsf: Downloading m3u8 information
[orf:on] dsf: Downloading m3u8 information
[orf:on] dsf: Downloading MPD manifest
[orf:on] dsf: Downloading MPD manifest
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for 14217002:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR ASR MORE INFO
─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
hls-audio-Deutsch-0 m3u8 audio only │ m3u8 │ audio only unknown [de] DRM, Deutsch
hls-audio-Deutsch-1 m3u8 audio only │ m3u8 │ audio only unknown [de] DRM, Deutsch
dash-p0aa0br192000-0 m4a audio only │ ~ 54.70MiB 192k dash │ audio only mp4a.40.2 192k 48k [de] DRM, DASH audio, m4a_dash
dash-p0aa0br192000-1 m4a audio only │ ~ 54.70MiB 192k dash │ audio only mp4a.40.2 192k 48k [de] DRM, DASH audio, m4a_dash
hls-992-0 mp4 640x360 │ ~282.63MiB 992k m3u8 │ unknown unknown DRM
hls-992-1 mp4 640x360 │ ~282.63MiB 992k m3u8 │ unknown unknown DRM
dash-p0va0br801596-0 mp4 640x360 25 │ ~228.38MiB 802k dash │ avc1.64001e 802k video only DRM, DASH video, mp4_dash
dash-p0va0br801596-1 mp4 640x360 25 │ ~228.38MiB 802k dash │ avc1.64001e 802k video only DRM, DASH video, mp4_dash
hls-1992-0 mp4 960x540 │ ~567.54MiB 1992k m3u8 │ unknown unknown DRM
hls-1992-1 mp4 960x540 │ ~567.54MiB 1992k m3u8 │ unknown unknown DRM
dash-p0va0br1801680-0 mp4 960x540 25 │ ~513.32MiB 1802k dash │ avc1.64001f 1802k video only DRM, DASH video, mp4_dash
dash-p0va0br1801680-1 mp4 960x540 25 │ ~513.32MiB 1802k dash │ avc1.64001f 1802k video only DRM, DASH video, mp4_dash
hls-3192-0 mp4 1280x720 │ ~909.43MiB 3192k m3u8 │ unknown unknown DRM
hls-3192-1 mp4 1280x720 │ ~909.43MiB 3192k m3u8 │ unknown unknown DRM
dash-p0va0br3001976-0 mp4 1280x720 25 │ ~855.29MiB 3002k dash │ avc1.64001f 3002k video only DRM, DASH video, mp4_dash
dash-p0va0br3001976-1 mp4 1280x720 25 │ ~855.29MiB 3002k dash │ avc1.64001f 3002k video only DRM, DASH video, mp4_dash
```
but I'm not sure it's the right place.
Also, thanks to all the maintainers and contributors; this is an awesome tool.
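For what it's worth, the effect of the patch above can be mimicked in plain Python (a sketch with invented names: real extractor code goes through yt-dlp's `traverse_obj` and format machinery):

```python
def tag_formats_with_drm(api_json, formats):
    """Propagate the episode-level is_drm_protected flag onto every format dict."""
    has_drm = bool(api_json.get('is_drm_protected'))
    for fmt in formats:
        fmt['has_drm'] = has_drm
    return formats

formats = tag_formats_with_drm(
    {'is_drm_protected': True},
    [{'format_id': 'hls-3192-0'}, {'format_id': 'dash-p0aa0br192000-0'}],
)
print([f['has_drm'] for f in formats])  # → [True, True]
```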
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
/tmp 1.6s ❱ nix run -- nixpkgs#yt-dlp "https://on.orf.at/video/14217002/dsf" --allow-unplayable-formats -F -vU
[debug] Command-line config: ['https://on.orf.at/video/14217002/dsf', '--allow-unplayable-formats', '-F', '-vU']
WARNING: You have asked for UNPLAYABLE formats to be listed/downloaded. This is a developer option intended for debugging.
If you experience any issues while using this option, DO NOT open a bug report
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)
[debug] Python 3.11.8 (CPython x86_64 64bit) - Linux-6.1.77-x86_64-with-glibc2.38 (OpenSSL 3.0.13 30 Jan 2024, glibc 2.38)
[debug] exe versions: ffmpeg 6.0 (setts), ffprobe 6.0, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.18.0, brotlicffi-1.1.0.0, certifi-2023.07.22, mutagen-1.47.0, requests-2.31.0, secretstorage-3.3.3, sqlite3-3.43.2, urllib3-2.0.7, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests
[debug] Loaded 1803 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.03.10 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)
[orf:on] Extracting URL: https://on.orf.at/video/14217002/dsf
[orf:on] dsf: Downloading webpage
[orf:on] dsf: Downloading JSON metadata
[orf:on] dsf: Downloading m3u8 information
[orf:on] dsf: Downloading m3u8 information
[orf:on] dsf: Downloading MPD manifest
[orf:on] dsf: Downloading MPD manifest
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for 14217002:
ID EXT RESOLUTION FPS │ FILESIZE TBR PROTO │ VCODEC VBR ACODEC ABR ASR MORE INFO
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
hls-audio-Deutsch-0 m3u8 audio only │ m3u8 │ audio only unknown [de] Deutsch
hls-audio-Deutsch-1 m3u8 audio only │ m3u8 │ audio only unknown [de] Deutsch
dash-p0aa0br192000-0 m4a audio only │ ~ 56.02MiB 192k dash │ audio only mp4a.40.2 192k 48k [de] DASH audio, m4a_dash
dash-p0aa0br192000-1 m4a audio only │ ~ 56.02MiB 192k dash │ audio only mp4a.40.2 192k 48k [de] DASH audio, m4a_dash
hls-992-0 mp4 640x360 │ ~289.41MiB 992k m3u8 │ unknown unknown
hls-992-1 mp4 640x360 │ ~289.41MiB 992k m3u8 │ unknown unknown
dash-p0va0br801596-0 mp4 640x360 25 │ ~233.86MiB 802k dash │ avc1.64001e 802k video only DRM, DASH video, mp4_dash
dash-p0va0br801596-1 mp4 640x360 25 │ ~233.86MiB 802k dash │ avc1.64001e 802k video only DRM, DASH video, mp4_dash
hls-1992-0 mp4 960x540 │ ~581.16MiB 1992k m3u8 │ unknown unknown
hls-1992-1 mp4 960x540 │ ~581.16MiB 1992k m3u8 │ unknown unknown
dash-p0va0br1801680-0 mp4 960x540 25 │ ~525.64MiB 1802k dash │ avc1.64001f 1802k video only DRM, DASH video, mp4_dash
dash-p0va0br1801680-1 mp4 960x540 25 │ ~525.64MiB 1802k dash │ avc1.64001f 1802k video only DRM, DASH video, mp4_dash
hls-3192-0 mp4 1280x720 │ ~931.26MiB 3192k m3u8 │ unknown unknown
hls-3192-1 mp4 1280x720 │ ~931.26MiB 3192k m3u8 │ unknown unknown
dash-p0va0br3001976-0 mp4 1280x720 25 │ ~875.82MiB 3002k dash │ avc1.64001f 3002k video only DRM, DASH video, mp4_dash
dash-p0va0br3001976-1 mp4 1280x720 25 │ ~875.82MiB 3002k dash │ avc1.64001f 3002k video only DRM, DASH video, mp4_dash
```
| null | https://github.com/yt-dlp/yt-dlp/pull/9677 | null | {'base_commit': '3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4', 'files': [{'path': 'yt_dlp/extractor/orf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16]}, "('ORFONIE', None, 570)": {'add': [585], 'mod': [572, 588]}, "('ORFONIE', '_extract_video', 588)": {'add': [606, 611], 'mod': [591, 598, 601]}, "('ORFONIE', '_real_extract', 619)": {'mod': [620, 621, 628, 629]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/orf.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2314b4d89fc111ddfcb25937210f1f1c2390cc4a | https://github.com/yt-dlp/yt-dlp/issues/4776 | bug | `InfoExtractor._get_cookies` fails if values contain quotes | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.08.19** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
`InfoExtractor._get_cookies` uses `http.cookies.SimpleCookie` to process the cookies. Analogous to #4692, the parsing fails fast instead of skipping the invalid values.
SimpleCookie allows values with quotes if set explicitly.
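Both behaviours are easy to reproduce with the standard library alone (the cookie string below is made up for illustration):

```python
from http.cookies import SimpleCookie

# A stray quote in one value makes SimpleCookie.load() stop parsing at that
# point, silently dropping the malformed cookie and everything after it.
jar = SimpleCookie()
jar.load('good=1; broken="unterminated; later=2')
print(sorted(jar.keys()))  # 'broken' and 'later' are gone

# Setting a value containing quotes explicitly, however, is accepted and
# gets escaped on output.
jar2 = SimpleCookie()
jar2['quoted'] = 'a "b" c'
print(jar2.output())
```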
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--cookies-from-browser', 'firefox', '-j', 'https://beta.crunchyroll.com/de/watch/GG1U2Q50J/the-former-couple-refuses-to-say', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.08.19 [48c88e0] (pip)
[debug] Python 3.10.5 (CPython 64bit) - Windows-10-10.0.22000-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg n5.1-10-g6ee1996721-20220822 (setts), ffprobe n5.1-10-g6ee1996721-20220822
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[Cookies] Extracting cookies from firefox
[debug] Extracting cookies from: "C:\Users\grub4k\AppData\Roaming\Mozilla\Firefox\Profiles\kbiex092.default-release\cookies.sqlite"
[Cookies] Extracted 790 cookies from firefox
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.08.19, Current version: 2022.08.19
yt-dlp is up to date (2022.08.19)
[debug] [crunchyroll:beta] Extracting URL: https://beta.crunchyroll.com/de/watch/GG1U2Q50J/the-former-couple-refuses-to-say
[crunchyroll:beta] the-former-couple-refuses-to-say: Downloading webpage
ERROR: GG1U2Q50J: An extractor error has occurred. (caused by KeyError('byId')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "C:\Users\grub4k\AppData\Local\pypoetry\Cache\virtualenvs\crunchyload-sk9Pq0JJ-py3.10\lib\site-packages\yt_dlp\extractor\common.py", line 666, in extract
ie_result = self._real_extract(url)
File "C:\Users\grub4k\AppData\Local\pypoetry\Cache\virtualenvs\crunchyload-sk9Pq0JJ-py3.10\lib\site-packages\yt_dlp\extractor\crunchyroll.py", line 805, in _real_extract
return self._redirect_from_beta(url, lang, internal_id, display_id, True, CrunchyrollIE.ie_key())
File "C:\Users\grub4k\AppData\Local\pypoetry\Cache\virtualenvs\crunchyload-sk9Pq0JJ-py3.10\lib\site-packages\yt_dlp\extractor\crunchyroll.py", line 752, in _redirect_from_beta
content_data = initial_state['content']['byId'][internal_id]
KeyError: 'byId'
```
| null | https://github.com/yt-dlp/yt-dlp/pull/4780 | null | {'base_commit': '2314b4d89fc111ddfcb25937210f1f1c2390cc4a', 'files': [{'path': 'test/test_cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "('TestCookies', 'test_pbkdf2_sha1', 137)": {'add': [139]}}}, {'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "(None, '_parse_browser_specification', 985)": {'add': [991]}}}, {'path': 'yt_dlp/extractor/common.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [23]}, "('InfoExtractor', '_get_cookies', 3633)": {'mod': [3634]}}}]} | [] | [] | [] | {
"iss_type": "2有点犹豫,出错了但是该报错是用于验证某个问题。",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/cookies.py",
"yt_dlp/extractor/common.py"
],
"doc": [],
"test": [
"test/test_cookies.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | a79cba0c95b8b74d2ca4f7fbf6ffe76e34ed7221 | https://github.com/yt-dlp/yt-dlp/issues/2840 | site-request | Site support request for: ixigua.com | ### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running yt-dlp version **2022.02.04**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/ytdl-org/youtube-dl#can-you-add-support-for-this-anime-video-site-or-site-which-shows-current-movies-for-free) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [x] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
Thailand, Mainland China, probably worldwide.
### Example URLs
https://www.ixigua.com/6996881461559165471
https://www.ixigua.com/6901922393657180679?id=6963688388327113255&logTag=c159ae59d579c199c066
### Description
Xigua Video (https://www.ixigua.com/) is an online video-sharing platform owned by ByteDance. As of June 2020, the platform has 131 million monthly active users.
### Verbose log
```shell
[debug] Command-line config: ['-vU', 'https://www.ixigua.com/6996881461559165471']
[debug] Encodings: locale cp874, fs utf-8, out utf-8, err utf-8, pref cp874
[debug] yt-dlp version 2022.02.04 [c1653e9ef]
[debug] Python version 3.7.2 (CPython 64bit) - Windows-10-10.0.18362-SP0
[debug] exe versions: ffmpeg 2021-10-28-git-e84c83ef98-full_build-www.gyan.dev (setts), ffprobe 2021-10-28-git-e84c83ef98-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets
[debug] Proxy map: {}
Latest version: 2022.02.04, Current version: 2022.02.04
yt-dlp is up to date (2022.02.04)
[debug] [generic] Extracting URL: https://www.ixigua.com/6996881461559165471
[generic] 6996881461559165471: Requesting header
WARNING: [generic] Falling back on generic information extractor.
[generic] 6996881461559165471: Downloading webpage
[generic] 6996881461559165471: Extracting information
[debug] Looking for video embeds
ERROR: Unsupported URL: https://www.ixigua.com/6996881461559165471
Traceback (most recent call last):
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\yt_dlp\YoutubeDL.py", line 1381, in wrapper
return func(self, *args, **kwargs)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\yt_dlp\YoutubeDL.py", line 1451, in __extract_info
ie_result = ie.extract(url)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\yt_dlp\extractor\common.py", line 612, in extract
ie_result = self._real_extract(url)
File "C:\Users\User\AppData\Local\Programs\Python\Python37\lib\site-packages\yt_dlp\extractor\generic.py", line 3986, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.ixigua.com/6996881461559165471
C:\Users\User>
```
| null | https://github.com/yt-dlp/yt-dlp/pull/3953 | null | {'base_commit': 'a79cba0c95b8b74d2ca4f7fbf6ffe76e34ed7221', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [722]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 700444c23ddb65f618c2abd942acdc0c58c650b1 | https://github.com/yt-dlp/yt-dlp/issues/3355 | bug
patch-available
regression | problem with double-dot segments (`/../`) after the hostname | ### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.04.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Description
Some URLs have a double-dot section after the hostname, which causes problems in yt-dlp.
Example: https://streamwo.com/v/gp445h2f
If we resolve this URL, we get this:
```
$ yt-dlp --get-url https://streamwo.com/v/gp445h2f
https://reoa92d.com/../uploaded/1649416469.mp4#t=0.1
```
Which has a `../` segment right after the hostname.
Opening this result in a browser, or downloading it using curl, is no problem:
```
$ curl -O https://reoa92d.com/../uploaded/1649416469.mp4
...
Succeeds
```
But yt-dlp fails:
```
$ yt-dlp https://streamwo.com/v/gp445h2f
[generic] gp445h2f: Requesting header
WARNING: [generic] Falling back on generic information extractor.
[generic] gp445h2f: Downloading webpage
[generic] gp445h2f: Extracting information
[download] Downloading playlist: Streamwo
[generic] playlist Streamwo: Collected 1 videos; downloading 1 of them
[download] Downloading video 1 of 1
[info] gp445h2f: Downloading 1 format(s): 0
ERROR: unable to download video data: HTTP Error 400: Bad Request
[download] Finished downloading playlist: Streamwo
```
mpv (which uses yt-dlp in its ytdl_hook) fails as well:
```
$ mpv https://streamwo.com/v/gp445h2f
[ffmpeg] https: HTTP error 400 Bad Request
Failed to open https://reoa92d.com/../uploaded/1649416469.mp4#t=0.1.
Exiting... (Errors when loading file)
```
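One way to make the downloader tolerate such URLs is to collapse dot segments client-side before issuing the request. Below is a simplified sketch of the `remove_dot_segments` algorithm from RFC 3986 section 5.2.4, assuming an absolute path (yt-dlp's actual fix may live elsewhere, e.g. in its URL-sanitization helpers):

```python
def remove_dot_segments(path):
    """Collapse '.' and '..' segments in an absolute URL path (simplified RFC 3986 5.2.4)."""
    output = []
    for segment in path.split('/'):
        if segment == '.':
            continue
        if segment == '..':
            if len(output) > 1:  # never pop above the root
                output.pop()
        else:
            output.append(segment)
    if path.split('/')[-1] in ('.', '..'):  # a trailing dot segment keeps the slash
        output.append('')
    return '/'.join(output)

print(remove_dot_segments('/../uploaded/1649416469.mp4'))  # → /uploaded/1649416469.mp4
print(remove_dot_segments('/a/b/../c/./d'))                # → /a/c/d
```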
### Verbose log
```shell
$ yt-dlp -vU https://streamwo.com/v/gp445h2f
[debug] Command-line config: ['-vU', 'https://streamwo.com/v/gp445h2f']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8
[debug] yt-dlp version 2022.04.08 [7884ade65] (zip)
[debug] Python version 3.10.4 (CPython 64bit) - Linux-5.15.32-1-lts-x86_64-with-glibc2.35
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 5.0 (setts), ffprobe 5.0, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: mutagen, sqlite, websockets
[debug] Proxy map: {}
Latest version: 2022.04.08, Current version: 2022.04.08
yt-dlp is up to date (2022.04.08)
[debug] [generic] Extracting URL: https://streamwo.com/v/gp445h2f
[generic] gp445h2f: Requesting header
WARNING: [generic] Falling back on generic information extractor.
[generic] gp445h2f: Downloading webpage
[generic] gp445h2f: Extracting information
[debug] Looking for video embeds
[debug] Identified a HTML5 media
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[download] Downloading playlist: Streamwo
[generic] playlist Streamwo: Collected 1 videos; downloading 1 of them
[download] Downloading video 1 of 1
[debug] Default format spec: bestvideo*+bestaudio/best
[info] gp445h2f: Downloading 1 format(s): 0
[debug] Invoking downloader on "https://reoa92d.com/../uploaded/1649416469.mp4#t=0.1"
ERROR: unable to download video data: HTTP Error 400: Bad Request
Traceback (most recent call last):
File "/home/koonix/./yt-dlp/yt_dlp/YoutubeDL.py", line 3138, in process_info
success, real_download = self.dl(temp_filename, info_dict)
File "/home/koonix/./yt-dlp/yt_dlp/YoutubeDL.py", line 2846, in dl
return fd.download(name, new_info, subtitle)
File "/home/koonix/./yt-dlp/yt_dlp/downloader/common.py", line 457, in download
ret = self.real_download(filename, info_dict)
File "/home/koonix/./yt-dlp/yt_dlp/downloader/http.py", line 369, in real_download
establish_connection()
File "/home/koonix/./yt-dlp/yt_dlp/downloader/http.py", line 128, in establish_connection
ctx.data = self.ydl.urlopen(request)
File "/home/koonix/./yt-dlp/yt_dlp/YoutubeDL.py", line 3601, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/lib/python3.10/urllib/request.py", line 525, in open
response = meth(req, response)
File "/usr/lib/python3.10/urllib/request.py", line 634, in http_response
response = self.parent.error(
File "/usr/lib/python3.10/urllib/request.py", line 563, in error
return self._call_chain(*args)
File "/usr/lib/python3.10/urllib/request.py", line 496, in _call_chain
result = func(*args)
File "/usr/lib/python3.10/urllib/request.py", line 643, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 400: Bad Request
[download] Finished downloading playlist: Streamwo
```
| null | https://github.com/yt-dlp/yt-dlp/pull/7662 | null | {'base_commit': '25b6e8f94679b4458550702b46e61249b875a4fd', 'files': [{'path': 'test/test_networking.py', 'status': 'modified', 'Loc': {"('HTTPTestRequestHandler', 'do_GET', 142)": {'add': [175]}, "('TestHTTPRequestHandler', None, 316)": {'add': [357]}}}, {'path': 'test/test_utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [50, 51, 135]}, "('TestUtil', None, 138)": {'mod': [936]}, "('TestUtil', 'test_escape_url', 936)": {'mod': [938, 942, 946, 950, 953]}}}, {'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [44], 'mod': [36]}, "('YoutubeDLCookieJar', 'get_cookie_header', 1309)": {'mod': [1311]}, "('YoutubeDLCookieJar', 'get_cookies_for_url', 1315)": {'mod': [1320]}}}, {'path': 'yt_dlp/networking/_urllib.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [44]}, "('HTTPHandler', 'http_request', 172)": {'mod': [182]}, "('HTTPHandler', 'http_response', 190)": {'mod': [215]}}}, {'path': 'yt_dlp/networking/common.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [29, 32]}, "('Request', 'url', 366)": {'mod': [371]}}}, {'path': 'yt_dlp/utils/_legacy.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "(None, 'sanitized_Request', 199)": {'mod': [200]}}}, {'path': 'yt_dlp/utils/_utils.py', 'status': 'modified', 'Loc': {"(None, 'escape_rfc3986', 2467)": {'mod': [2467, 2468, 2469, 2472, 2473, 2474, 2475, 2476, 2477, 2478, 2479, 2480, 2481]}}}, {'path': 'yt_dlp/utils/networking.py', 'status': 'modified', 'Loc': {"(None, 'clean_headers', 114)": {'add': [117]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/utils/_legacy.py",
"yt_dlp/networking/_urllib.py",
"yt_dlp/utils/_utils.py",
"yt_dlp/utils/networking.py",
"yt_dlp/cookies.py",
"yt_dlp/networking/common.py"
],
"doc": [],
"test": [
"test/test_utils.py",
"test/test_networking.py"
],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 4b5eec0aaa7c02627f27a386591b735b90e681a8 | https://github.com/yt-dlp/yt-dlp/issues/11641 | site-bug
patch-available | [TikTok] ERROR: Postprocessing: Conversion failed! when embedding thumbnail | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
The main problem is a "Conversion failed!" post-processor error; many videos fail to download. ffmpeg also logs errors such as "skipping unsupported chunk: ANMF" and "Nothing was written into output file, because at least one of its streams received no packets. Conversion failed!"
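The "skipping unsupported chunk: ANMF" messages indicate the TikTok thumbnail is an animated WebP, whose animation frames ffmpeg's single-image PNG conversion cannot decode. A rough way to spot such thumbnails before converting (a heuristic sketch: it just scans the RIFF container for an `ANMF` chunk tag, and the byte strings below are fabricated minimal examples, not real files):

```python
def is_animated_webp(data: bytes) -> bool:
    """Heuristic: a RIFF/WEBP container that contains an ANMF (animation frame) chunk."""
    return data[:4] == b'RIFF' and data[8:12] == b'WEBP' and b'ANMF' in data

# Fabricated minimal payloads for illustration only:
static = b'RIFF\x00\x00\x00\x00WEBPVP8 '
animated = b'RIFF\x00\x00\x00\x00WEBPVP8XANIMANMF' + b'\x00' * 16
print(is_animated_webp(static), is_animated_webp(animated))  # → False True
```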
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['https://www.tiktok.com/@cooperspamsasf/video/7432045283686632710', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] [TikTok] Found universal data for rehydration
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[debug] Invoking http downloader on "https://v19-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-pve-0068/oAXOvcjeEAZzgjgfgQLKR5SGzeNrxA9ICICxHI/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=2404&bt=1202&cs=2&ds=4&ft=4fUEKMk88Zmo0WRLZb4jVaThrpWrKsd.&mime_type=video_mp4&qs=15&rc=NzNpZWU8OzRmNzs0Nzk1aUBpam93dnY5cnh4djMzNzczM0AtMS0uNS41NTIxMTBhXzEyYSNmZW9uMmRjbGVgLS1kMTZzcw%3D%3D&btag=e00088000&expire=1732609535&l=2024112602251903ACD4E62348E641B01E&ply_type=2&policy=2&signature=1e746658933c8ee3a81756c4afee15d3&tk=tt_chain_token"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i "file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].webp" -update 1 -movflags +faststart "file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].png"
[debug] ffmpeg version 7.0.2-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 59. 8.100 / 59. 8.100
libavcodec 61. 3.100 / 61. 3.100
libavformat 61. 1.100 / 61. 1.100
libavdevice 61. 1.100 / 61. 1.100
libavfilter 10. 1.100 / 10. 1.100
libswscale 8. 1.100 / 8. 1.100
libswresample 5. 1.100 / 5. 1.100
libpostproc 58. 1.100 / 58. 1.100
[webp @ 000001545c4432c0] skipping unsupported chunk: ANIM
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] skipping unsupported chunk: ANMF
[webp @ 000001545c4432c0] image data not found
[image2 @ 000001545c441940] Could not find codec parameters for stream 0 (Video: webp, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'file:HALLOWEEN TYPEE #halloweencostume #swat #duo #blonde #brunette #thatassperfectbaby #soccergirls [7432045283686632710].webp':
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn
Stream mapping:
Stream #0:0 -> #0:0 (webp (native) -> png (native))
Press [q] to stop, [?] for help
[webp @ 000001545c469fc0] skipping unsupported chunk: ANIM
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] skipping unsupported chunk: ANMF
[webp @ 000001545c469fc0] image data not found
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Decoding error: Invalid data found when processing input
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Decode error rate 1 exceeds maximum 0.666667
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Task finished with error code: -1145393733 (Error number -1145393733 occurred)
[vist#0:0/webp @ 000001545c4432c0] [dec:webp @ 000001545c44c440] Terminating thread with return code -1145393733 (Error number -1145393733 occurred)
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001545c44ac80] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001545c44ac80] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[vost#0:0/png @ 000001545c448c00] Could not open encoder before EOF
[vost#0:0/png @ 000001545c448c00] Task finished with error code: -22 (Invalid argument)
[vost#0:0/png @ 000001545c448c00] Terminating thread with return code -22 (Invalid argument)
[out#0/image2 @ 000001545c467e40] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3556, in process_info
replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3740, in post_process
info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3722, in run_all_pps
info = self.run_pp(pp, info)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3700, in run_pp
files_to_delete, infodict = pp.run(infodict)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 22, in run
ret = func(self, info, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 127, in wrapper
return func(self, info)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\embedthumbnail.py", line 84, in run
thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 1107, in convert_thumbnail
self.real_run_ffmpeg(
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 367, in real_run_ffmpeg
raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
[ERROR] Failed to process URL: https://www.tiktok.com/@cooperspamsasf/video/7432045283686632710
[debug] Command-line config: ['https://www.tiktok.com/@bris.main/video/7439516415444536606', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@leilaaaaaaaaa34/video/7430073853495299350', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@erindottie/video/7428505324375559457', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@user415387491623/video/7434688554627910968', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elsa.vikstrom/video/7431528033044942102', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@ellatomine2/video/7440197178603228449', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elena__blondie/video/7440396119076506912', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@johaanssson/video/7440864222747086112', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] [TikTok] Found universal data for rehydration
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[debug] Invoking http downloader on "https://v19-webapp-prime.tiktok.com/video/tos/useast2a/tos-useast2a-ve-0068-euttp/ok6GJnAQE2q0AFfyAaPQIQDhK0KQBwD1EIcfR4/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1346&bt=673&cs=2&ds=4&eid=256&ft=4fUEKMk88Zmo0bRLZb4jVHCurpWrKsd.&mime_type=video_mp4&qs=15&rc=ZDVnOzplaWlpZzdmNmdpOUBpM3ZuM3Q5cndudzMzZjczM0AxY2A0LzZjNTMxLTAwY2JfYSNgLTBoMmQ0MS5gLS1kMWNzcw%3D%3D&btag=e00088000&expire=1732609546&l=202411260225363935CAF2808D524710A5&ply_type=2&policy=2&signature=c15a759aebb22c7a55843e0c19030be4&tk=tt_chain_token"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -f image2 -pattern_type none -i "file:Ghettooo #fyp #viral #trend [7440864222747086112].webp" -update 1 -movflags +faststart "file:Ghettooo #fyp #viral #trend [7440864222747086112].png"
[debug] ffmpeg version 7.0.2-full_build-www.gyan.dev Copyright (c) 2000-2024 the FFmpeg developers
built with gcc 13.2.0 (Rev5, Built by MSYS2 project)
configuration: --enable-gpl --enable-version3 --enable-static --disable-w32threads --disable-autodetect --enable-fontconfig --enable-iconv --enable-gnutls --enable-libxml2 --enable-gmp --enable-bzlib --enable-lzma --enable-libsnappy --enable-zlib --enable-librist --enable-libsrt --enable-libssh --enable-libzmq --enable-avisynth --enable-libbluray --enable-libcaca --enable-sdl2 --enable-libaribb24 --enable-libaribcaption --enable-libdav1d --enable-libdavs2 --enable-libuavs3d --enable-libxevd --enable-libzvbi --enable-librav1e --enable-libsvtav1 --enable-libwebp --enable-libx264 --enable-libx265 --enable-libxavs2 --enable-libxeve --enable-libxvid --enable-libaom --enable-libjxl --enable-libopenjpeg --enable-libvpx --enable-mediafoundation --enable-libass --enable-frei0r --enable-libfreetype --enable-libfribidi --enable-libharfbuzz --enable-liblensfun --enable-libvidstab --enable-libvmaf --enable-libzimg --enable-amf --enable-cuda-llvm --enable-cuvid --enable-dxva2 --enable-d3d11va --enable-d3d12va --enable-ffnvcodec --enable-libvpl --enable-nvdec --enable-nvenc --enable-vaapi --enable-libshaderc --enable-vulkan --enable-libplacebo --enable-opencl --enable-libcdio --enable-libgme --enable-libmodplug --enable-libopenmpt --enable-libopencore-amrwb --enable-libmp3lame --enable-libshine --enable-libtheora --enable-libtwolame --enable-libvo-amrwbenc --enable-libcodec2 --enable-libilbc --enable-libgsm --enable-libopencore-amrnb --enable-libopus --enable-libspeex --enable-libvorbis --enable-ladspa --enable-libbs2b --enable-libflite --enable-libmysofa --enable-librubberband --enable-libsoxr --enable-chromaprint
libavutil 59. 8.100 / 59. 8.100
libavcodec 61. 3.100 / 61. 3.100
libavformat 61. 1.100 / 61. 1.100
libavdevice 61. 1.100 / 61. 1.100
libavfilter 10. 1.100 / 10. 1.100
libswscale 8. 1.100 / 8. 1.100
libswresample 5. 1.100 / 5. 1.100
libpostproc 58. 1.100 / 58. 1.100
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANIM
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb512c0] image data not found
[image2 @ 000001c6cbb569c0] Could not find codec parameters for stream 0 (Video: webp, none): unspecified size
Consider increasing the value for the 'analyzeduration' (0) and 'probesize' (5000000) options
Input #0, image2, from 'file:Ghettooo #fyp #viral #trend [7440864222747086112].webp':
Duration: 00:00:00.04, start: 0.000000, bitrate: N/A
Stream #0:0: Video: webp, none, 25 fps, 25 tbr, 25 tbn
Stream mapping:
Stream #0:0 -> #0:0 (webp (native) -> png (native))
Press [q] to stop, [?] for help
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANIM
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] skipping unsupported chunk: ANMF
[webp @ 000001c6cbb61cc0] image data not found
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Decoding error: Invalid data found when processing input
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Decode error rate 1 exceeds maximum 0.666667
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Task finished with error code: -1145393733 (Error number -1145393733 occurred)
[vist#0:0/webp @ 000001c6cbb512c0] [dec:webp @ 000001c6cbb59e00] Terminating thread with return code -1145393733 (Error number -1145393733 occurred)
Cannot determine format of input 0:0 after EOF
[vf#0:0 @ 000001c6cbb53140] Task finished with error code: -1094995529 (Invalid data found when processing input)
[vf#0:0 @ 000001c6cbb53140] Terminating thread with return code -1094995529 (Invalid data found when processing input)
[vost#0:0/png @ 000001c6cbb6f7c0] Could not open encoder before EOF
[vost#0:0/png @ 000001c6cbb6f7c0] Task finished with error code: -22 (Invalid argument)
[vost#0:0/png @ 000001c6cbb6f7c0] Terminating thread with return code -22 (Invalid argument)
[out#0/image2 @ 000001c6cbb6ef40] Nothing was written into output file, because at least one of its streams received no packets.
frame= 0 fps=0.0 q=0.0 Lsize= 0KiB time=N/A bitrate=N/A speed=N/A
Conversion failed!
ERROR: Postprocessing: Conversion failed!
Traceback (most recent call last):
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3556, in process_info
replace_info_dict(self.post_process(dl_filename, info_dict, files_to_move))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3740, in post_process
info = self.run_all_pps('post_process', info, additional_pps=info.get('__postprocessors'))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3722, in run_all_pps
info = self.run_pp(pp, info)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\YoutubeDL.py", line 3700, in run_pp
files_to_delete, infodict = pp.run(infodict)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 22, in run
ret = func(self, info, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\common.py", line 127, in wrapper
return func(self, info)
^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\embedthumbnail.py", line 84, in run
thumbnail_filename = convertor.convert_thumbnail(thumbnail_filename, 'png')
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 1107, in convert_thumbnail
self.real_run_ffmpeg(
File "C:\Users\J\miniconda3\Lib\site-packages\yt_dlp\postprocessor\ffmpeg.py", line 367, in real_run_ffmpeg
raise FFmpegPostProcessorError(stderr.strip().splitlines()[-1])
yt_dlp.postprocessor.ffmpeg.FFmpegPostProcessorError: Conversion failed!
[ERROR] Failed to process URL: https://www.tiktok.com/@johaanssson/video/7440864222747086112
[debug] Command-line config: ['https://www.tiktok.com/@filippasekesan0/video/7440543183844560150', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elana.maguire15/video/7439872632234708257', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@smostervik/video/7434809831665503520', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@bille.135/video/7439449253501603104', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@kristal.329/video/7435311238092950815', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@johanna_nordstrand/video/7440174704758983969', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@cassidyannpayne/video/7440590041866456362', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@backup_josefinelykk/video/7440092940057267488', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error cp1252 (No VT), screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (pip)
[debug] Python 3.11.9 (CPython AMD64 64bit) - Windows-10-10.0.22631-SP0 (OpenSSL 3.3.2 3 Sep 2024)
[debug] exe versions: ffmpeg 7.0.2-full_build-www.gyan.dev (setts), ffprobe 7.0.2-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.20.0, brotli-1.0.9, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.45.3, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Extracting cookies from: "C:\Users\J\AppData\Roaming\Mozilla\Firefox\Profiles\c2ty66d6.default-release\cookies.sqlite"
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Loading archive file 'F:\\yt-dlp tiktok likes\\archive.txt'
[debug] Command-line config: ['https://www.tiktok.com/@elina.pp3/video/7439466484176391456', '--download-archive', 'F:\\yt-dlp tiktok likes\\archive.txt', '--write-thumbnail', '--embed-thumbnail', '--verbose', '--cookies-from-browser', 'firefox']
```
| null | https://github.com/yt-dlp/yt-dlp/pull/11645 | null | {'base_commit': '4b5eec0aaa7c02627f27a386591b735b90e681a8', 'files': [{'path': 'yt_dlp/extractor/tiktok.py', 'status': 'modified', 'Loc': {"('TikTokBaseIE', '_parse_aweme_video_app', 322)": {'mod': [416, 417, 418, 419, 420, 421, 422, 423, 470]}, "('TikTokBaseIE', '_parse_aweme_video_web', 567)": {'mod': [603, 604, 605, 606, 607]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/tiktok.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | a40b0070c2a00d3ed839897462171a82323aa875 | https://github.com/yt-dlp/yt-dlp/issues/9003 | site-enhancement | [linkedin] yt-dlp see no subtitles but they exist (webvtt) | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_No response_
### Provide a description that is worded well enough to be understood
yt-dlp downloads the video fine but reports "no subtitles" for a video that actually has them (WebVTT; they can be downloaded manually).
This seems specific to LinkedIn. I don't know whether it's typical for this site; I did not check other LinkedIn videos.
OS: Fedora Linux 39, x86_64
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
$ yt-dlp-night -vU --list-subs https://www.linkedin.com/posts/the-mathworks_2_why-use-kalman-filters-activity-7150516916539805696-HSe3
[debug] Command-line config: ['-vU', '--list-subs', 'https://www.linkedin.com/posts/the-mathworks_2_why-use-kalman-filters-activity-7150516916539805696-HSe3']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2024.01.09.232723 from yt-dlp/yt-dlp-nightly-builds [95e82347b] (zip)
[debug] Python 3.12.1 (CPython x86_64 64bit) - Linux-6.6.9-200.fc39.x86_64-x86_64-with-glibc2.38 (OpenSSL 3.1.1 30 May 2023, glibc 2.38)
[debug] exe versions: ffmpeg 6.0.1 (setts), ffprobe 6.0.1
[debug] Optional libraries: Cryptodome-3.19.0, brotli-1.1.0, certifi-2023.05.07, mutagen-1.46.0, requests-2.28.2, sqlite3-3.42.0, urllib3-1.26.18, websockets-11.0.3
[debug] Proxy map: {}
[debug] Request Handlers: urllib
[debug] Loaded 1798 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp-nightly-builds/releases/latest
Latest version: nightly@2024.01.09.232723 from yt-dlp/yt-dlp-nightly-builds
yt-dlp is up to date (nightly@2024.01.09.232723 from yt-dlp/yt-dlp-nightly-builds)
[LinkedIn] Extracting URL: https://www.linkedin.com/posts/the-mathworks_2_why-use-kalman-filters-activity-7150516916539805696-HSe3
[LinkedIn] 2: Downloading webpage
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec:vp9.2(10), channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
2 has no subtitles
```
| null | https://github.com/yt-dlp/yt-dlp/pull/9056 | null | {'base_commit': 'a40b0070c2a00d3ed839897462171a82323aa875', 'files': [{'path': 'yt_dlp/extractor/linkedin.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8, 14, 15], 'mod': [6, 7, 10, 13]}, "('LinkedInIE', '_real_extract', 98)": {'add': [112], 'mod': [102, 103, 104, 105, 107, 117, 118, 119, 121]}, "('LinkedInIE', None, 85)": {'mod': [86, 92, 93, 94]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/linkedin.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | b965087396ddb2d40dfe5bc12391ee000945129d | https://github.com/yt-dlp/yt-dlp/issues/110 | PR-needed | zsh completions are not installed | ## Checklist
- [ ] I'm reporting a broken site support issue
- [x] I've verified that I'm running yt-dlp version **2021.02.24**
- [ ] I've checked that all provided URLs are alive and playable in a browser
- [ ] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar bug reports including closed ones
- [x] I've read bugs section in FAQ
## Description
`python setup.py build` skips the zsh completion file because it expects to find `_yt-dlp` but finds `yt-dlp.zsh` instead.
One possibility is to abandon installing completions via `setup.py` and have them installed via `make` instead, in which case it'd probably be a good idea to have the `yt-dlp` target either call `setup.py` or build the self-executing zip based on a flag.
Another possibility (which I haven't researched in depth yet) is to try prodding `setup.py` into accepting `yt-dlp.zsh` and renaming it.
In any case, it might be a good idea to have `setup.py` be as declarative as possible, following PEP-517/518. | null | https://github.com/yt-dlp/yt-dlp/pull/114 | null | {'base_commit': 'b965087396ddb2d40dfe5bc12391ee000945129d', 'files': [{'path': 'Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 7, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 64, 105, 108, 110, 113, 115, 118, 126, 139, 140, 141]}}}, {'path': 'devscripts/bash-completion.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11]}}}, {'path': 'devscripts/fish-completion.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [13]}}}, {'path': 'devscripts/zsh-completion.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [11]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [30, 31]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"devscripts/bash-completion.py",
"devscripts/fish-completion.py",
"setup.py",
"devscripts/zsh-completion.py"
],
"doc": [],
"test": [],
"config": [
"Makefile"
],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 4f08e586553755ab61f64a5ef9b14780d91559a7 | https://github.com/yt-dlp/yt-dlp/issues/4409 | site-bug | ERROR: 03354: An extractor error has occurred. | ### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2022.07.18** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
The command downloads a 26-episode series from Tubi. The entire download proceeds without issue, getting me all 26 episodes. But then it thinks there is a 27th episode, which is when I get the error:
[download] Downloading video 27 of 27
[debug] [TubiTv] Extracting URL: tubitv:03354
[TubiTv] 03354: Downloading JSON metadata
ERROR: 03354: An extractor error has occurred. (caused by KeyError('url')); please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "/home/melissa/.local/lib/python3.8/site-packages/yt_dlp/extractor/common.py", line 644, in extract
ie_result = self._real_extract(url)
File "/home/melissa/.local/lib/python3.8/site-packages/yt_dlp/extractor/tubitv.py", line 76, in _real_extract
url = video_data['url']
KeyError: 'url'
[download] Finished downloading playlist: stargate-infinity
I don't know if this is just some weird issue with the playlist data from Tubi, but I'm reporting it like the error text asked.
[yt-dlp-vU.txt](https://github.com/yt-dlp/yt-dlp/files/9163183/yt-dlp-vU.txt)
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
The output is over 100k lines, so instead of pasting it I attached it as a file.
```
| null | https://github.com/yt-dlp/yt-dlp/pull/4416 | null | {'base_commit': '4f08e586553755ab61f64a5ef9b14780d91559a7', 'files': [{'path': 'yt_dlp/extractor/tubitv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9]}, "('TubiTvShowIE', '_entries', 130)": {'add': [137]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/tubitv.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | c459d45dd4d417fb80a52e1a04e607776a44baa4 | https://github.com/yt-dlp/yt-dlp/issues/6029 | site-bug
patch-available | Chilloutzone: Unable to extract video data | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2023.01.06** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Germany
### Provide a description that is worded well enough to be understood
When trying to download a video from chilloutzone.net - e.g. https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html - the correct extractor is chosen, but then the error "Unable to extract video data" is thrown.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2023.01.06 [6becd25] (win_exe)
[debug] Python 3.8.10 (CPython AMD64 64bit) - Windows-10-10.0.19044-SP0 (OpenSSL 1.1.1k 25 Mar 2021)
[debug] exe versions: ffmpeg 2022-10-13-git-9e8a327e68-full_build-www.gyan.dev (setts), ffprobe 2022-10-13-git-9e8a327e68-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.16.0, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-10.4
[debug] Proxy map: {}
[debug] Loaded 1760 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2023.01.06, Current version: 2023.01.06
yt-dlp is up to date (2023.01.06)
[Chilloutzone] Extracting URL: https://www.chilloutzone.net/video/ordentlich-abgeschuettelt.html
[Chilloutzone] ordentlich-abgeschuettelt: Downloading webpage
ERROR: [Chilloutzone] ordentlich-abgeschuettelt: Unable to extract video data; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "yt_dlp\extractor\common.py", line 680, in extract
File "yt_dlp\extractor\chilloutzone.py", line 56, in _real_extract
File "yt_dlp\extractor\common.py", line 1264, in _html_search_regex
File "yt_dlp\extractor\common.py", line 1228, in _search_regex
```
| null | https://github.com/yt-dlp/yt-dlp/pull/6445 | null | {'base_commit': 'c459d45dd4d417fb80a52e1a04e607776a44baa4', 'files': [{'path': 'yt_dlp/extractor/chilloutzone.py', 'status': 'modified', 'Loc': {"('ChilloutzoneIE', None, 12)": {'add': [21, 33], 'mod': [13, 15, 25, 32, 36, 37, 38, 40, 42, 43, 44, 45, 46]}, "('ChilloutzoneIE', '_real_extract', 50)": {'add': [54], 'mod': [51, 52, 56, 57, 58, 59, 61, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 75, 76, 77, 79, 80, 81, 82, 84, 85, 91, 92]}, '(None, None, None)': {'mod': [1, 4, 5, 8]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/chilloutzone.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 68be95bd0ca3f76aa63c9812935bd826b3a42e53 | https://github.com/yt-dlp/yt-dlp/issues/6551 | good first issue
site-bug
patch-available | [youku] HTML in error message | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.03.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Test link (seems to be some paywall or "member's area"):
https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1
Not a big deal, but thought this should be reported.
Other regular links seem to be fine:
https://v.youku.com/v_show/id_XNTA3MzUyMTUyMA==.html?spm=a2hja.14919748_WEBHOME_NEW.drawer15.d_zj1_3&s=efbfbd4a46efbfbd5975&scm=20140719.manual.19594.show_efbfbd4a46efbfbd5975
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
./yt-dlp -vU "https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1"
[debug] Command-line config: ['-vU', 'https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.03.03 [934496428] (zip)
[debug] Python 3.11.2 (CPython arm64 64bit) - macOS-13.2.1-arm64-arm-64bit (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: phantomjs 2.1.1
[debug] Optional libraries: sqlite3-2.6.0
[debug] Proxy map: {}
[debug] Extractor Plugins: SamplePluginIE
[debug] Post-Processor Plugins: SamplePluginPP
[debug] Loaded 1845 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Available version: stable@2023.03.04, Current version: stable@2023.03.03
Current Build Hash: 5a6829509847cbe86cd5200e0e285f154c50416cf28a1b49b341ae1a030a98d6
[debug] Downloading _update_spec from https://github.com/yt-dlp/yt-dlp/releases/latest/download/_update_spec
Updating to stable@2023.03.04 ...
[debug] Downloading yt-dlp from https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp
[debug] Downloading SHA2-256SUMS from https://github.com/yt-dlp/yt-dlp/releases/latest/download/SHA2-256SUMS
Updated yt-dlp to stable@2023.03.04
[debug] Restarting: /opt/homebrew/Cellar/python@3.11/3.11.2_1/Frameworks/Python.framework/Versions/3.11/Resources/Python.app/Contents/MacOS/Python ./yt-dlp -vU 'https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1'
[debug] Command-line config: ['-vU', 'https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2023.03.04 [392389b7d] (zip)
[debug] Python 3.11.2 (CPython arm64 64bit) - macOS-13.2.1-arm64-arm-64bit (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: phantomjs 2.1.1
[debug] Optional libraries: no_Cryptodome-None, sqlite3-2.6.0
[debug] Proxy map: {}
[debug] Extractor Plugins: SamplePluginIE
[debug] Post-Processor Plugins: SamplePluginPP
[debug] Loaded 1787 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Available version: stable@2023.03.04, Current version: stable@2023.03.04
Current Build Hash: 91cad9f121c1f6f0a81b747415c46ecba0ff331ed38cc6433040b4ac7b6e15ca
yt-dlp is up to date (stable@2023.03.04)
[youku] Extracting URL: https://v.youku.com/v_show/id_XNTg4NTg3MjI4MA==.html?spm=a1z3jc.11711052.0.0&isextonly=1
[youku] XNTg4NTg3MjI4MA: Retrieving cna info
[youku] XNTg4NTg3MjI4MA: Downloading JSON metadata
ERROR: [youku] XNTg4NTg3MjI4MA: Youku server reported error -2002: 该视频已经加密,请<font color="#FF0000">输入密码</font>; please report this issue on https://github.com/yt-dlp/yt-dlp/issues?q= , filling out the appropriate issue template. Confirm you are on the latest version using yt-dlp -U
File "./yt-dlp/yt_dlp/extractor/common.py", line 694, in extract
ie_result = self._real_extract(url)
^^^^^^^^^^^^^^^^^^^^^^^
File "./yt-dlp/yt_dlp/extractor/youku.py", line 196, in _real_extract
raise ExtractorError(msg)
```
| null | https://github.com/yt-dlp/yt-dlp/pull/6690 | null | {'base_commit': '68be95bd0ca3f76aa63c9812935bd826b3a42e53', 'files': [{'path': 'yt_dlp/extractor/youku.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('YoukuIE', None, 16)": {'add': [83], 'mod': [29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70]}, "('YoukuIE', '_real_extract', 150)": {'mod': [195]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/youku.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 8e6e3651727b0b85764857fc6329fe5e0a3f00de | https://github.com/yt-dlp/yt-dlp/issues/7520 | enhancement | ValueError: could not find firefox container "XYZ" in containers.json | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a bug unrelated to a specific site
- [X] I've verified that I'm running yt-dlp version **2023.06.22** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
### Provide a description that is worded well enough to be understood
Firefox container support seems to be broken on Linux
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--cookies-from-browser', 'firefox::Gmail at Home', '--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2023.07.06.133255 [90db9a3c0] (linux_exe)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-6.2.12-surface-x86_64-with-glibc2.36 (OpenSSL 3.1.1 30 May 2023, glibc 2.36)
[debug] exe versions: ffmpeg 5.1.1 (setts), ffprobe 5.1.1, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.18.0, brotli-1.0.9, certifi-2023.05.07, mutagen-1.46.0, sqlite3-2.6.0, websockets-11.0.3
[Cookies] Extracting cookies from firefox
[debug] Extracting cookies from: "/home/jay/.mozilla/firefox/xpzt0btw.default-release/cookies.sqlite"
Traceback (most recent call last):
File "yt_dlp/__main__.py", line 17, in <module>
File "yt_dlp/__init__.py", line 1008, in main
File "yt_dlp/__init__.py", line 962, in _real_main
File "yt_dlp/YoutubeDL.py", line 674, in __init__
File "yt_dlp/YoutubeDL.py", line 3876, in print_debug_header
File "yt_dlp/YoutubeDL.py", line 3920, in _setup_opener
File "yt_dlp/cookies.py", line 106, in load_cookies
File "yt_dlp/cookies.py", line 123, in extract_cookies_from_browser
File "yt_dlp/cookies.py", line 163, in _extract_firefox_cookies
ValueError: could not find firefox container "Gmail at Home" in containers.json
[11419] Failed to execute script '__main__' due to unhandled exception!
```
| null | https://github.com/yt-dlp/yt-dlp/pull/9016 | null | {'base_commit': '8e6e3651727b0b85764857fc6329fe5e0a3f00de', 'files': [{'path': 'yt_dlp/cookies.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "(None, '_extract_firefox_cookies', 117)": {'mod': [125, 127, 129, 131]}, "(None, '_firefox_browser_dir', 185)": {'mod': [185, 187, 189, 190]}, "(None, '_extract_chrome_cookies', 249)": {'mod': [271]}, "(None, '_get_windows_v10_key', 945)": {'mod': [950]}, "(None, '_find_most_recently_used_file', 1052)": {'mod': [1052, 1054, 1056, 1061, 1062]}, "(None, '_is_path', 1075)": {'mod': [1076]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/cookies.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | aebb4f4ba78ec7542416832e9dd5e47788cb12aa | https://github.com/yt-dlp/yt-dlp/issues/4649 | site-request | https://nos.nl/ | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2022.08.08** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Netherlands
### Provide a description that is worded well enough to be understood
https://nos.nl/
Videos from this website (the Dutch equivalent of the BBC) don't work: Unsupported URL.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.08.08 [3157158] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.19044-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: avconv -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] Checking exe version: avprobe -bsfs
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.06.15, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.08.08, Current version: 2022.08.08
yt-dlp is up to date (2022.08.08)
[debug] [generic] Extracting URL: https://nos.nl/nieuwsuur/artikel/2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen
[generic] 2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] 2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen: Extracting information
[debug] Looking for Brightcove embeds
[debug] Looking for embeds
ERROR: Unsupported URL: https://nos.nl/nieuwsuur/artikel/2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1441, in wrapper
File "yt_dlp\YoutubeDL.py", line 1517, in __extract_info
File "yt_dlp\extractor\common.py", line 666, in extract
File "yt_dlp\extractor\generic.py", line 3077, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://nos.nl/nieuwsuur/artikel/2440353-verzakking-door-droogte-dreigt-tot-een-miljoen-kwetsbare-huizen
```
| null | https://github.com/yt-dlp/yt-dlp/pull/4822 | null | {'base_commit': 'aebb4f4ba78ec7542416832e9dd5e47788cb12aa', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1182]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 2530b68d4476fe6cb4b25897b906cbb1774ca7c9 | https://github.com/yt-dlp/yt-dlp/issues/5209 | site-request | Genius.com support request | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I'm running yt-dlp version **2022.10.04** ([update instructions](https://github.com/yt-dlp/yt-dlp#update)) or later (specify commit)
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
United States
### Example URLs
https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly
https://genius.com/videos/Breaking-down-drakes-certified-lover-boy-kanye-beef-way-2-sexy-cudi
### Provide a description that is worded well enough to be understood
yt-dlp can't extract audio or video from the site
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version 2022.10.04 [4e0511f] (win32_exe)
[debug] Python 3.8.10 (CPython 64bit) - Windows-10-10.0.19044-SP0
[debug] Checking exe version: ffmpeg -bsfs
[debug] Checking exe version: ffprobe -bsfs
[debug] exe versions: ffmpeg 2022-08-13-git-c469c3c3b1-full_build-www.gyan.dev (setts), ffprobe 2022-08-13-git-c469c3c3b1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.15.0, brotli-1.0.9, certifi-2022.09.24, mutagen-1.45.1, sqlite3-2.6.0, websockets-10.3
[debug] Proxy map: {}
[debug] Loaded 1690 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: 2022.10.04, Current version: 2022.10.04
yt-dlp is up to date (2022.10.04)
[debug] [generic] Extracting URL: https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly
[generic] Vince-staples-breaks-down-the-meaning-of-when-sparks-fly: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] Vince-staples-breaks-down-the-meaning-of-when-sparks-fly: Extracting information
[debug] Looking for Brightcove embeds
[debug] Looking for embeds
ERROR: Unsupported URL: https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1477, in wrapper
File "yt_dlp\YoutubeDL.py", line 1553, in __extract_info
File "yt_dlp\extractor\common.py", line 672, in extract
File "yt_dlp\extractor\generic.py", line 3062, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://genius.com/videos/Vince-staples-breaks-down-the-meaning-of-when-sparks-fly
```
| null | https://github.com/yt-dlp/yt-dlp/pull/5221 | null | {'base_commit': '2530b68d4476fe6cb4b25897b906cbb1774ca7c9', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [631]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | d1c4f6d4da75ac55cf573afe53b1e4a0f776a8f7 | https://github.com/yt-dlp/yt-dlp/issues/982 | geo-blocked | [Broken] TF1.fr multi-language videos: no detection of other languages than French | <!--
######################################################################
WARNING!
IGNORING THE FOLLOWING TEMPLATE WILL RESULT IN ISSUE CLOSED AS INCOMPLETE
######################################################################
-->
## Checklist
<!--
Carefully read and work through this check list in order to prevent the most common mistakes and misuse of yt-dlp:
- First of, make sure you are using the latest version of yt-dlp. Run `yt-dlp --version` and ensure your version is 2021.09.02. If it's not, see https://github.com/yt-dlp/yt-dlp on how to update. Issues with outdated version will be REJECTED.
- Make sure that all provided video/audio/playlist URLs (if any) are alive and playable in a browser.
- Make sure that all URLs and arguments with special characters are properly quoted or escaped as explained in https://github.com/yt-dlp/yt-dlp.
- Search the bugtracker for similar issues: https://github.com/yt-dlp/yt-dlp. DO NOT post duplicates.
- Finally, put x into all relevant boxes like this [x] (Dont forget to delete the empty space)
-->
- [x] I'm reporting a broken site support
- [x] I've verified that I'm running yt-dlp version **2021.09.02**
- [x] I've checked that all provided URLs are alive and playable in a browser
- [x] I've checked that all URLs and arguments with special characters are properly quoted or escaped
- [x] I've searched the bugtracker for similar issues including closed ones
## Verbose log
<!--
Provide the complete verbose output of yt-dlp that clearly demonstrates the problem.
Add the `-v` flag to your command line you run yt-dlp with (`yt-dlp -v <your command line>`), copy the WHOLE output and insert it below. It should look similar to this:
[debug] System config: []
[debug] User config: []
[debug] Command-line args: [u'-v', u'http://www.youtube.com/watch?v=BaW_jenozKc']
[debug] Encodings: locale cp1251, fs mbcs, out cp866, pref cp1251
[debug] yt-dlp version 2021.09.02
[debug] Python version 2.7.11 - Windows-2003Server-5.2.3790-SP2
[debug] exe versions: ffmpeg N-75573-g1d0487f, ffprobe N-75573-g1d0487f, rtmpdump 2.4
[debug] Proxy map: {}
<more lines>
-->
```
yt-dlp -v -F https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html
[debug] Command-line config: ['-v', '-F', 'https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] yt-dlp version 2021.09.02 (zip)
[debug] Python version 3.7.3 (CPython 64bit) - Linux-5.10.0-0.bpo.5-amd64-x86_64-with-debian-10.10
[debug] exe versions: ffmpeg 4.1.6-1, ffprobe 4.1.6-1, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: mutagen, sqlite
[debug] Proxy map: {}
[debug] [TF1] Extracting URL: https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html
[TF1] kill-the-gringo-86959372: Downloading JSON metadata
[debug] [wat.tv] Extracting URL: wat:13802773
[wat.tv] 13802773: Downloading JSON metadata
[wat.tv] 13802773: Downloading MPD manifest
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for 13802773:
ID EXT RESOLUTION FPS | TBR PROTO | VCODEC VBR ACODEC ABR ASR MORE INFO
--------------------- --- ---------- --- - ----- ----- - ----------- ----- --------- ---- ------- --------------------------
dash-audio_fra=64000 m4a audio only | 64k dash | mp4a.40.2 64k 48000Hz [fr], DASH audio, m4a_dash
dash-audio_fra=128000 m4a audio only | 128k dash | mp4a.40.2 128k 48000Hz [fr], DASH audio, m4a_dash
dash-video=200033 mp4 416x234 25 | 200k dash | avc1.42C01E 200k DASH video, mp4_dash
dash-video=400072 mp4 480x270 25 | 400k dash | avc1.42C01E 400k DASH video, mp4_dash
dash-video=600100 mp4 640x360 25 | 600k dash | avc1.42C01E 600k DASH video, mp4_dash
dash-video=1200222 mp4 1024x576 25 | 1200k dash | avc1.4D401F 1200k DASH video, mp4_dash
dash-video=1700265 mp4 1024x576 25 | 1700k dash | avc1.4D401F 1700k DASH video, mp4_dash
dash-video=2500406 mp4 1280x720 25 | 2500k dash | avc1.4D401F 2500k DASH video, mp4_dash
yt-dlp -v -F "https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q"
[debug] Command-line config: ['-v', '-F', 'https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q']
[debug] Encodings: locale UTF-8, fs utf-8, out UTF-8, pref UTF-8
[debug] yt-dlp version 2021.09.02 (zip)
[debug] Python version 3.7.3 (CPython 64bit) - Linux-5.10.0-0.bpo.5-amd64-x86_64-with-debian-10.10
[debug] exe versions: ffmpeg 4.1.6-1, ffprobe 4.1.6-1, phantomjs 2.1.1, rtmpdump 2.4
[debug] Optional libraries: mutagen, sqlite
[debug] Proxy map: {}
[debug] [generic] Extracting URL: https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q
[generic] 13802773: Requesting header
WARNING: [generic] Falling back on generic information extractor.
[generic] 13802773: Downloading webpage
[generic] 13802773: Extracting information
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, vcodec:vp9.2(10), acodec, filesize, fs_approx, tbr, vbr, abr, asr, proto, vext, aext, hasaud, source, id
[info] Available formats for 13802773:
ID EXT RESOLUTION FPS | TBR PROTO | VCODEC VBR ACODEC ABR ASR MORE INFO
---------------- --- ---------- --- - ----- ----- - ----------- ----- --------- ---- ------- --------------------------
audio_eng=64000 m4a audio only | 64k dash | mp4a.40.2 64k 48000Hz [en], DASH audio, m4a_dash
audio_fra=64000 m4a audio only | 64k dash | mp4a.40.2 64k 48000Hz [fr], DASH audio, m4a_dash
audio_eng=128000 m4a audio only | 128k dash | mp4a.40.2 128k 48000Hz [en], DASH audio, m4a_dash
audio_fra=128000 m4a audio only | 128k dash | mp4a.40.2 128k 48000Hz [fr], DASH audio, m4a_dash
video=200033 mp4 416x234 25 | 200k dash | avc1.42C01E 200k DASH video, mp4_dash
video=400072 mp4 480x270 25 | 400k dash | avc1.42C01E 400k DASH video, mp4_dash
video=600100 mp4 640x360 25 | 600k dash | avc1.42C01E 600k DASH video, mp4_dash
video=1200222 mp4 1024x576 25 | 1200k dash | avc1.4D401F 1200k DASH video, mp4_dash
video=1700265 mp4 1024x576 25 | 1700k dash | avc1.4D401F 1700k DASH video, mp4_dash
video=2500406 mp4 1280x720 25 | 2500k dash | avc1.4D401F 2500k DASH video, mp4_dash
```
<!--
Do not remove the above ```
-->
## Description
<!--
Provide an explanation of your issue in an arbitrary form. Provide any additional information, suggested solution and as much context and examples as possible.
If work on your issue requires account credentials please provide them or explain how one can obtain them.
-->
On TF1.fr some videos are available with original audio in addition to the French dub, the latter being the default audio when playing the video in a browser.
However, yt-dlp currently detects no audio stream other than the default French one, as shown in the first part of the log for the example video at "https://www.tf1.fr/tf1/get-the-gringo/videos/kill-the-gringo-86959372.html": all detected audio streams carry "audio_fra=XXX" IDs even though this video also has English audio.
I suspect the problem lies in how the video's MPD manifest is located or parsed, since the English streams are detected (and downloadable) when yt-dlp is given the manifest URL directly. Using the browser's network inspector, I identified the MPD manifest of the example video at "https://das-q1-ssl.tf1.fr/2/USP-0x0/27/73/13802773/ssm/6a2603d6515db50912cfb89b775467977b553a21ca27ae131180d424ecab3a73.ism/13802773.mpd?e=1631733432&max_bitrate=2700000&st=uIejdE925vcQ9GqvWS9w_Q". As the second part of the log shows, when yt-dlp is run against that manifest URL, the original English audio streams are detected with "audio_eng=XXX" IDs and download without issue.
Additional information on TF1.fr:
- an account may be needed in order to watch the videos with a browser but creating this account only requires providing an email address and password. The account is not needed when using yt-dlp.
- videos are usually geo-restricted to France, as a consequence the use of a proxy may be needed to work on the issue outside of France
- many videos are time-limited (usually 7 days) when they are provided as part of the TV-catchup service but the example video above (and others I could provide if needed) should not have a time limit (or a very long one). | null | https://github.com/yt-dlp/yt-dlp/pull/3739 | null | {'base_commit': 'd1c4f6d4da75ac55cf573afe53b1e4a0f776a8f7', 'files': [{'path': 'yt_dlp/extractor/wat.py', 'status': 'modified', 'Loc': {"('WatIE', '_real_extract', 47)": {'mod': [57]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/wat.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | aa4b0545120becc11a5992384ce52c943da8ead5 | https://github.com/yt-dlp/yt-dlp/issues/1945 | site-bug | SonyLIV Premium Content giving 406 ERROR | ### Checklist
- [X] I'm reporting a broken site
- [X] I've verified that I'm running yt-dlp version **2021.12.01**. ([update instructions](https://github.com/yt-dlp/yt-dlp#update))
- [X] I've checked that all provided URLs are alive and playable in a browser
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/ytdl-org/youtube-dl#video-url-contains-an-ampersand-and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues including closed ones. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
_India_
### Description
Requesting support for SonyLIV to download the latest episodes.
- The content is a subscriber-only episode.
- The content is non-DRM; I have verified this.
- I have passed cookies from my premium account using `--cookies`.
Running the URL
https://www.sonyliv.com/shows/kaun-banega-crorepati-1700000195/fighting-all-odds-on-the-hot-seat-1000148334?watch=true
gives a 406 error.
I am running the latest version.
### Verbose log
```shell
[debug] Command-line config: ['https://www.sonyliv.com/shows/kaun-banega-crorepati-1700000195/fighting-all-odds-on-the-hot-seat-1000148334?watch=true', '--cookies', 'sony-cookie.txt', '--verbose']
[debug] Encodings: locale UTF-8, fs utf-8, out utf-8, err utf-8, pref UTF-8
[debug] yt-dlp version 2021.12.01 [91f071af6] (zip)
[debug] Python version 3.9.7 (CPython 64bit) - macOS-11.5.2-x86_64-i386-64bit
[debug] exe versions: ffmpeg 4.4 (setts), ffprobe 4.4, rtmpdump 2.4
[debug] Optional libraries: Cryptodome, mutagen, sqlite, websockets
[debug] Proxy map: {}
[debug] Using fake IP 117.195.44.37 (IN) as X-Forwarded-For
[SonyLIV] Downloading JSON metadata
[debug] [SonyLIV] Extracting URL: https://www.sonyliv.com/shows/kaun-banega-crorepati-1700000195/fighting-all-odds-on-the-hot-seat-1000148334?watch=true
[SonyLIV] 1000148334: Downloading JSON metadata
ERROR: [SonyLIV] 1000148334: Unable to download JSON metadata: HTTP Error 406: Not Acceptable (caused by <HTTPError 406: 'Not Acceptable'>); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. Be sure to call yt-dlp with the --verbose flag and include its complete output. (caused by <HTTPError 406: 'Not Acceptable'>); please report this issue on https://github.com/yt-dlp/yt-dlp . Make sure you are using the latest version; type yt-dlp -U to update. Be sure to call yt-dlp with the --verbose flag and include its complete output.
File "/usr/local/bin/yt-dlp/yt_dlp/extractor/common.py", line 715, in _request_webpage
return self._downloader.urlopen(url_or_request)
File "/usr/local/bin/yt-dlp/yt_dlp/YoutubeDL.py", line 3385, in urlopen
return self._opener.open(req, timeout=self._socket_timeout)
File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 523, in open
response = meth(req, response)
File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 632, in http_response
response = self.parent.error(
File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 561, in error
return self._call_chain(*args)
File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/usr/local/Cellar/python@3.9/3.9.7/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 641, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
```
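An HTTP 406 ("Not Acceptable") generally means the server rejected the request's headers rather than the URL itself; the linked PR resolves this report by adjusting the API request setup in `yt_dlp/extractor/sonyliv.py`. A hedged, illustrative sketch of the general pattern — the endpoint and header values below are placeholders, not SonyLIV's actual API:

```python
import urllib.request

# Illustrative only: servers that return 406 often require specific Accept
# and/or User-Agent headers. The placeholder URL is not a real endpoint.
req = urllib.request.Request(
    'https://example.com/api/metadata',
    headers={
        'Accept': 'application/json',
        'User-Agent': 'Mozilla/5.0',  # some APIs reject non-browser-like UAs
    },
)
print(req.get_header('Accept'))  # headers attached before the request is sent
```

The request object is only constructed here, never sent; in an extractor these headers would be passed along with the JSON-metadata download call.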
| null | https://github.com/yt-dlp/yt-dlp/pull/1959 | null | {'base_commit': 'aa4b0545120becc11a5992384ce52c943da8ead5', 'files': [{'path': 'yt_dlp/extractor/sonyliv.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "('SonyLIVIE', '_call_api', 61)": {'add': [69], 'mod': [62, 63, 64, 68]}, "('SonyLIVIE', None, 16)": {'mod': [59]}, "('SonyLIVIE', '_real_initialize', 78)": {'mod': [79]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2 (error caused by an incomplete implementation)",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/sonyliv.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
yt-dlp | yt-dlp | 3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4 | https://github.com/yt-dlp/yt-dlp/issues/9640 | site-request | Support NTS.live | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting a new site support request
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that none of provided URLs [violate any copyrights](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#is-the-website-primarily-used-for-piracy) or contain any [DRM](https://en.wikipedia.org/wiki/Digital_rights_management) to the best of my knowledge
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and am willing to share it if required
### Region
_No response_
### Example URLs
- Single embedded Soundcloud link: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024
- Single embedded Mixcloud link: https://www.nts.live/shows/absolute-fiction/episodes/absolute-fiction-23rd-july-2022
### Provide a description that is worded well enough to be understood
nts.live is an internet radio site with curated music mixes. As far as I know, the mixes are all hosted on Soundcloud or Mixcloud, and the site simply embeds an instance of one of the latter players.
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['-vU', 'https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024']
[debug] User config "/home/<user>/.yt-dlp/config": ['--no-mtime', '--merge-output-format', 'mp4/mkv']
[debug] Encodings: locale UTF-8, fs utf-8, pref UTF-8, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.03.10 from yt-dlp/yt-dlp [615a84447] (pip)
[debug] Python 3.10.12 (CPython x86_64 64bit) - Linux-<redacted>
[debug] exe versions: ffmpeg 4.4.2 (setts), ffprobe 4.4.2, rtmpdump 2.4
[debug] Optional libraries: Cryptodome-3.17, brotli-1.0.9, certifi-2022.12.07, mutagen-1.46.0, requests-2.31.0, sqlite3-3.37.2, urllib3-2.1.0, websockets-12.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets
[debug] Loaded 1803 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.03.10 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.03.10 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024
[generic] yu-su-2nd-april-2024: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] yu-su-2nd-april-2024: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024
Traceback (most recent call last):
File "/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1594, in wrapper
return func(self, *args, **kwargs)
File "/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/YoutubeDL.py", line 1729, in __extract_info
ie_result = ie.extract(url)
File "/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/extractor/common.py", line 732, in extract
ie_result = self._real_extract(url)
File "/home/<user>/.local/pipx/venvs/yt-dlp/lib/python3.10/site-packages/yt_dlp/extractor/generic.py", line 2530, in _real_extract
raise UnsupportedError(url)
yt_dlp.utils.UnsupportedError: Unsupported URL: https://www.nts.live/shows/yu-su/episodes/yu-su-2nd-april-2024
```
| null | https://github.com/yt-dlp/yt-dlp/pull/9641 | null | {'base_commit': '3e35aa32c74bc108375be8c8b6b3bfc90dfff1b4', 'files': [{'path': 'yt_dlp/extractor/_extractors.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1334]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"yt_dlp/extractor/_extractors.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
comfyanonymous | ComfyUI | 3cd7d84b53724a97c1436f70b6da6975e3d93484 | https://github.com/comfyanonymous/ComfyUI/issues/5627 | Potential Bug | Boolean value of Tensor with more than one value is ambiguous | ### Expected Behavior
Generate image using Pulid with flux model
### Actual Behavior
Generation stops. A few hours earlier everything was fine.
### Steps to Reproduce
[Pulid_workglow_v1.json](https://github.com/user-attachments/files/17777510/Pulid_workglow_v1.json)
### Debug Logs
```powershell
# ComfyUI Error Report
## Error Details
- **Node Type:** SamplerCustomAdvanced
- **Exception Type:** RuntimeError
- **Exception Message:** Boolean value of Tensor with more than one value is ambiguous
## Stack Trace
File "/workspace/ComfyUI/execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/workspace/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 633, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 740, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 719, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 624, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/k_diffusion/sampling.py", line 1058, in sample_deis
denoised = model(x_cur, t_cur * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 299, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 706, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 709, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/model_base.py", line 144, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/ldm/flux/model.py", line 181, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced/pulidflux.py", line 113, in forward_orig
if node_data['sigma_start'] >= timesteps >= node_data['sigma_end']:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
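The last frame of the stack trace chains comparisons on a tensor (`sigma_start >= timesteps >= sigma_end`), which implicitly calls `bool()` on a multi-element tensor and raises the "Boolean value ... is ambiguous" error. A minimal sketch of the pitfall and an elementwise-safe rewrite, shown here with NumPy (PyTorch raises the analogous `RuntimeError` for the same chained comparison):

```python
import numpy as np

timesteps = np.array([0.25, 0.75])  # stand-in for the per-batch sigma tensor
sigma_start, sigma_end = 1.0, 0.0

try:
    # Chained comparison expands to (sigma_start >= timesteps) and (...),
    # and `and` forces bool() on a multi-element array -> ambiguous.
    in_range = sigma_start >= timesteps >= sigma_end
except ValueError as exc:
    print(f"ambiguous: {exc}")

# Elementwise-safe: combine the comparisons and reduce explicitly with .all()
# (or .any(), depending on the intended semantics).
in_range = bool(((timesteps <= sigma_start) & (timesteps >= sigma_end)).all())
print(in_range)
```

This matches the failure in `ComfyUI-PuLID-Flux-Enhanced/pulidflux.py` line 113, so the fix belongs in that custom node (or in reducing `timesteps` to a scalar before the comparison), not in ComfyUI core.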
## System Information
- **ComfyUI Version:** v0.2.7-21-g3b9a6cf
- **Arguments:** main.py --listen 0.0.0.0 --port 3001
- **OS:** posix
- **Python Version:** 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]
- **Embedded Python:** false
- **PyTorch Version:** 2.4.0+cu121
## Devices
- **Name:** cuda:0 NVIDIA RTX A6000 : cudaMallocAsync
- **Type:** cuda
- **VRAM Total:** 51033931776
- **VRAM Free:** 8504171346
- **Torch VRAM Total:** 42177921024
- **Torch VRAM Free:** 44522322
## Logs
```
2024-11-15T15:10:07.506380 - [START] Security scan2024-11-15T15:10:07.506419 -
2024-11-15T15:10:14.304713 - [DONE] Security scan2024-11-15T15:10:14.304749 -
2024-11-15T15:10:14.666461 - ## ComfyUI-Manager: installing dependencies done.2024-11-15T15:10:14.666711 -
2024-11-15T15:10:14.666906 - ** ComfyUI startup time:2024-11-15T15:10:14.667072 - 2024-11-15T15:10:14.667286 - 2024-11-15 15:10:14.6668072024-11-15T15:10:14.667467 -
2024-11-15T15:10:14.667646 - ** Platform:2024-11-15T15:10:14.667827 - 2024-11-15T15:10:14.668015 - Linux2024-11-15T15:10:14.668182 -
2024-11-15T15:10:14.668352 - ** Python version:2024-11-15T15:10:14.668498 - 2024-11-15T15:10:14.668676 - 3.11.9 (main, Apr 6 2024, 17:59:24) [GCC 11.4.0]2024-11-15T15:10:14.668833 -
2024-11-15T15:10:14.668989 - ** Python executable:2024-11-15T15:10:14.669143 - 2024-11-15T15:10:14.669299 - /workspace/ComfyUI/venv/bin/python32024-11-15T15:10:14.669440 -
2024-11-15T15:10:14.669604 - ** ComfyUI Path:2024-11-15T15:10:14.669753 - 2024-11-15T15:10:14.669908 - /workspace/ComfyUI2024-11-15T15:10:14.670052 -
2024-11-15T15:10:14.670240 - ** Log path:2024-11-15T15:10:14.670409 - 2024-11-15T15:10:14.670553 - /workspace/ComfyUI/comfyui.log2024-11-15T15:10:14.670711 -
2024-11-15T15:10:14.695455 -
Prestartup times for custom nodes:2024-11-15T15:10:14.695654 -
2024-11-15T15:10:14.695875 - 0.0 seconds:2024-11-15T15:10:14.696062 - 2024-11-15T15:10:14.696247 - /workspace/ComfyUI/custom_nodes/rgthree-comfy2024-11-15T15:10:14.696411 -
2024-11-15T15:10:14.696598 - 7.2 seconds:2024-11-15T15:10:14.696753 - 2024-11-15T15:10:14.696918 - /workspace/ComfyUI/custom_nodes/ComfyUI-Manager2024-11-15T15:10:14.697090 -
2024-11-15T15:10:14.697257 -
2024-11-15T15:10:17.991611 - Total VRAM 48670 MB, total RAM 1031687 MB
2024-11-15T15:10:17.991921 - pytorch version: 2.4.0+cu121
2024-11-15T15:10:22.112364 - /usr/local/lib/python3.11/dist-packages/xformers/ops/fmha/flash.py:211: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_fwd")
2024-11-15T15:10:22.789455 - /usr/local/lib/python3.11/dist-packages/xformers/ops/fmha/flash.py:344: FutureWarning: `torch.library.impl_abstract` was renamed to `torch.library.register_fake`. Please use that instead; we will remove `torch.library.impl_abstract` in a future version of PyTorch.
@torch.library.impl_abstract("xformers_flash::flash_bwd")
2024-11-15T15:10:23.157733 - xformers version: 0.0.27.post2
2024-11-15T15:10:23.158121 - Set vram state to: NORMAL_VRAM
2024-11-15T15:10:23.158371 - Device: cuda:0 NVIDIA RTX A6000 : cudaMallocAsync
2024-11-15T15:10:23.467250 - Using xformers cross attention
2024-11-15T15:10:28.313956 - [Prompt Server] web root: /workspace/ComfyUI/web
2024-11-15T15:10:30.360426 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/kornia/feature/lightglue.py:44: FutureWarning: `torch.cuda.amp.custom_fwd(args...)` is deprecated. Please use `torch.amp.custom_fwd(args..., device_type='cuda')` instead.
@torch.cuda.amp.custom_fwd(cast_inputs=torch.float32)
2024-11-15T15:10:31.565415 - Total VRAM 48670 MB, total RAM 1031687 MB
2024-11-15T15:10:31.565871 - pytorch version: 2.4.0+cu121
2024-11-15T15:10:31.566108 - xformers version: 0.0.27.post2
2024-11-15T15:10:31.566488 - Set vram state to: NORMAL_VRAM
2024-11-15T15:10:31.566752 - Device: cuda:0 NVIDIA RTX A6000 : cudaMallocAsync
2024-11-15T15:10:34.269461 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/albumentations/__init__.py:13: UserWarning: A new version of Albumentations is available: 1.4.21 (you have 1.4.15). Upgrade using: pip install -U albumentations. To disable automatic update checks, set the environment variable NO_ALBUMENTATIONS_UPDATE to 1.
check_for_updates()
2024-11-15T15:10:36.030672 - generated new fontManager
2024-11-15T15:10:38.280816 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/timm/models/layers/__init__.py:48: FutureWarning: Importing from timm.models.layers is deprecated, please import via timm.layers
warnings.warn(f"Importing from {__name__} is deprecated, please import via timm.layers", FutureWarning)
2024-11-15T15:10:38.304404 - Nvidia APEX normalization not installed, using PyTorch LayerNorm
2024-11-15T15:10:38.637033 - [comfyui_controlnet_aux] | INFO -> Using ckpts path: /workspace/ComfyUI/custom_nodes/comfyui_controlnet_aux/ckpts
2024-11-15T15:10:38.637271 - [comfyui_controlnet_aux] | INFO -> Using symlinks: False
2024-11-15T15:10:38.637476 - [comfyui_controlnet_aux] | INFO -> Using ort providers: ['CUDAExecutionProvider', 'DirectMLExecutionProvider', 'OpenVINOExecutionProvider', 'ROCMExecutionProvider', 'CPUExecutionProvider', 'CoreMLExecutionProvider']
2024-11-15T15:10:38.928121 - DWPose: Onnxruntime with acceleration providers detected
2024-11-15T15:10:39.359585 - [rgthree-comfy] Loaded 42 exciting nodes. 🎉
2024-11-15T15:10:45.309073 - WAS Node Suite: OpenCV Python FFMPEG support is enabled
2024-11-15T15:10:45.309623 - WAS Node Suite Warning: `ffmpeg_bin_path` is not set in `/workspace/ComfyUI/custom_nodes/was-node-suite-comfyui/was_suite_config.json` config file. Will attempt to use system ffmpeg binaries if available.
2024-11-15T15:10:48.681237 - WAS Node Suite: Finished. Loaded 218 nodes successfully.

"Every artist was first an amateur." - Ralph Waldo Emerson
2024-11-15T15:10:50.489733 - [Crystools INFO] Crystools version: 1.21.0
2024-11-15T15:10:50.667784 - [Crystools INFO] CPU: Intel(R) Xeon(R) Gold 6238R CPU @ 2.20GHz - Arch: x86_64 - OS: Linux 6.5.0-41-generic
2024-11-15T15:10:50.668198 - [Crystools INFO] Pynvml (Nvidia) initialized.
2024-11-15T15:10:50.668654 - [Crystools INFO] GPU/s:
2024-11-15T15:10:50.668945 - [Crystools INFO] 0) NVIDIA RTX A6000
2024-11-15T15:10:50.669186 - [Crystools INFO] NVIDIA Driver: 550.54.14
2024-11-15T15:10:50.833539 - Creating new Ultralytics Settings v0.0.6 file ✅
View Ultralytics Settings with 'yolo settings' or at '/root/.config/Ultralytics/settings.json'
Update Settings with 'yolo settings key=value', i.e. 'yolo settings runs_dir=path/to/dir'. For help see https://docs.ultralytics.com/quickstart/#ultralytics-settings.
2024-11-15T15:10:51.536848 - ### Loading: ComfyUI-Impact-Pack (V7.11.3)
2024-11-15T15:10:51.635434 - ### Loading: ComfyUI-Impact-Pack (Subpack: V0.8)
2024-11-15T15:10:51.818086 - [Impact Pack] Wildcards loading done.
2024-11-15T15:10:51.833380 - ### Loading: ComfyUI-Manager (V2.51.9)
2024-11-15T15:10:51.998868 - ### ComfyUI Revision: 2829 [3b9a6cf2] | Released on '2024-11-13'

Import times for custom nodes:
2024-11-15T15:10:52.010417 - 0.0 seconds: /workspace/ComfyUI/custom_nodes/websocket_image_save.py
2024-11-15T15:10:52.010570 - 0.0 seconds: /workspace/ComfyUI/custom_nodes/cg-use-everywhere
2024-11-15T15:10:52.010699 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/comfy-image-saver
2024-11-15T15:10:52.010839 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI_UltimateSDUpscale
2024-11-15T15:10:52.010980 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI_essentials
2024-11-15T15:10:52.011120 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-GGUF
2024-11-15T15:10:52.011264 - 0.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Custom-Scripts
2024-11-15T15:10:52.011382 - 0.2 seconds: /workspace/ComfyUI/custom_nodes/rgthree-comfy
2024-11-15T15:10:52.011516 - 0.2 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager
2024-11-15T15:10:52.011641 - 0.3 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Impact-Pack
2024-11-15T15:10:52.011771 - 0.3 seconds: /workspace/ComfyUI/custom_nodes/comfyui_controlnet_aux
2024-11-15T15:10:52.011923 - 0.7 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-KJNodes
2024-11-15T15:10:52.012058 - 0.9 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-AdvancedLivePortrait
2024-11-15T15:10:52.012186 - 2.0 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-Crystools
2024-11-15T15:10:52.012301 - 6.1 seconds: /workspace/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced
2024-11-15T15:10:52.012421 - 9.3 seconds: /workspace/ComfyUI/custom_nodes/was-node-suite-comfyui
2024-11-15T15:10:52.012541 -
2024-11-15T15:10:52.030777 - Starting server
2024-11-15T15:10:52.031183 - To see the GUI go to: http://0.0.0.0:3001
2024-11-15T15:10:52.067762 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/model-list.json
2024-11-15T15:10:52.076078 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/alter-list.json
2024-11-15T15:10:52.094295 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/github-stats.json
2024-11-15T15:10:52.133768 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/extension-node-map.json
2024-11-15T15:10:52.179770 - [ComfyUI-Manager] default cache updated: https://raw.githubusercontent.com/ltdrdata/ComfyUI-Manager/main/custom-node-list.json
2024-11-15T15:12:56.132631 - FETCH DATA from: /workspace/ComfyUI/custom_nodes/ComfyUI-Manager/extension-node-map.json [DONE]
2024-11-15T15:13:14.903062 - got prompt
2024-11-15T15:13:23.615984 - Using xformers attention in VAE
2024-11-15T15:13:23.619955 - Using xformers attention in VAE
2024-11-15T15:13:27.463924 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-15T15:13:27.658365 - find model: /workspace/ComfyUI/models/insightface/models/antelopev2/1k3d68.onnx landmark_3d_68 ['None', 3, 192, 192] 0.0 1.0
2024-11-15T15:13:27.794506 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-15T15:13:27.806739 - find model: /workspace/ComfyUI/models/insightface/models/antelopev2/2d106det.onnx landmark_2d_106 ['None', 3, 192, 192] 0.0 1.0
2024-11-15T15:13:27.878881 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-15T15:13:27.883868 - find model: /workspace/ComfyUI/models/insightface/models/antelopev2/genderage.onnx genderage ['None', 3, 96, 96] 0.0 1.0
2024-11-15T15:13:30.280279 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-15T15:13:30.604582 - find model: /workspace/ComfyUI/models/insightface/models/antelopev2/glintr100.onnx recognition ['None', 3, 112, 112] 127.5 127.5
2024-11-15T15:13:30.805291 - Applied providers: ['CPUExecutionProvider'], with options: {'CPUExecutionProvider': {}}
2024-11-15T15:13:30.805866 - find model: /workspace/ComfyUI/models/insightface/models/antelopev2/scrfd_10g_bnkps.onnx detection [1, 3, '?', '?'] 127.5 128.0
2024-11-15T15:13:30.809248 - set det-size: (640, 640)
2024-11-15T15:13:30.810521 - Loaded EVA02-CLIP-L-14-336 model config.
2024-11-15T15:13:30.929776 - Shape of rope freq: torch.Size([576, 64])
2024-11-15T15:13:46.076137 - Loading pretrained EVA02-CLIP-L-14-336 weights (eva_clip).
2024-11-15T15:13:48.288170 - incompatible_keys.missing_keys: ['visual.rope.freqs_cos', 'visual.rope.freqs_sin', 'visual.blocks.0.attn.rope.freqs_cos', 'visual.blocks.0.attn.rope.freqs_sin', 'visual.blocks.1.attn.rope.freqs_cos', 'visual.blocks.1.attn.rope.freqs_sin', 'visual.blocks.2.attn.rope.freqs_cos', 'visual.blocks.2.attn.rope.freqs_sin', 'visual.blocks.3.attn.rope.freqs_cos', 'visual.blocks.3.attn.rope.freqs_sin', 'visual.blocks.4.attn.rope.freqs_cos', 'visual.blocks.4.attn.rope.freqs_sin', 'visual.blocks.5.attn.rope.freqs_cos', 'visual.blocks.5.attn.rope.freqs_sin', 'visual.blocks.6.attn.rope.freqs_cos', 'visual.blocks.6.attn.rope.freqs_sin', 'visual.blocks.7.attn.rope.freqs_cos', 'visual.blocks.7.attn.rope.freqs_sin', 'visual.blocks.8.attn.rope.freqs_cos', 'visual.blocks.8.attn.rope.freqs_sin', 'visual.blocks.9.attn.rope.freqs_cos', 'visual.blocks.9.attn.rope.freqs_sin', 'visual.blocks.10.attn.rope.freqs_cos', 'visual.blocks.10.attn.rope.freqs_sin', 'visual.blocks.11.attn.rope.freqs_cos', 'visual.blocks.11.attn.rope.freqs_sin', 'visual.blocks.12.attn.rope.freqs_cos', 'visual.blocks.12.attn.rope.freqs_sin', 'visual.blocks.13.attn.rope.freqs_cos', 'visual.blocks.13.attn.rope.freqs_sin', 'visual.blocks.14.attn.rope.freqs_cos', 'visual.blocks.14.attn.rope.freqs_sin', 'visual.blocks.15.attn.rope.freqs_cos', 'visual.blocks.15.attn.rope.freqs_sin', 'visual.blocks.16.attn.rope.freqs_cos', 'visual.blocks.16.attn.rope.freqs_sin', 'visual.blocks.17.attn.rope.freqs_cos', 'visual.blocks.17.attn.rope.freqs_sin', 'visual.blocks.18.attn.rope.freqs_cos', 'visual.blocks.18.attn.rope.freqs_sin', 'visual.blocks.19.attn.rope.freqs_cos', 'visual.blocks.19.attn.rope.freqs_sin', 'visual.blocks.20.attn.rope.freqs_cos', 'visual.blocks.20.attn.rope.freqs_sin', 'visual.blocks.21.attn.rope.freqs_cos', 'visual.blocks.21.attn.rope.freqs_sin', 'visual.blocks.22.attn.rope.freqs_cos', 'visual.blocks.22.attn.rope.freqs_sin', 'visual.blocks.23.attn.rope.freqs_cos', 
'visual.blocks.23.attn.rope.freqs_sin']
2024-11-15T15:13:51.867986 - Loading PuLID-Flux model.
2024-11-15T15:14:01.781958 - model weight dtype torch.bfloat16, manual cast: None
2024-11-15T15:14:01.783274 - model_type FLUX
2024-11-15T15:14:57.687329 - /workspace/ComfyUI/venv/lib/python3.11/site-packages/insightface/utils/transform.py:68: FutureWarning: `rcond` parameter will change to the default of machine precision times ``max(M, N)`` where M and N are the input matrix dimensions.
To use the future default and silence this warning we advise to pass `rcond=None`, to keep using the old, explicitly pass `rcond=-1`.
P = np.linalg.lstsq(X_homo, Y)[0].T # Affine matrix. 3 x 4
2024-11-15T15:15:17.860517 - Requested to load FluxClipModel_
2024-11-15T15:15:17.860863 - Loading 1 new model
2024-11-15T15:16:01.924433 - loaded completely 0.0 9320.35888671875 True
2024-11-15T15:16:02.546263 - Requested to load ControlNetFlux
2024-11-15T15:16:02.546512 - Requested to load Flux
2024-11-15T15:16:02.546655 - Loading 2 new models
2024-11-15T15:16:04.190233 - loaded completely 0.0 6297.97265625 True
2024-11-15T15:16:11.191997 - loaded completely 0.0 23500.488403320312 True
2024-11-15T15:16:11.285167 -   0%|          | 0/25 [00:00<?, ?it/s]
2024-11-15T15:16:11.375186 - Requested to load AutoencodingEngine
2024-11-15T15:16:11.375480 - Loading 1 new model
2024-11-15T15:16:11.542628 - loaded completely 0.0 159.87335777282715 True
2024-11-15T15:16:12.035048 -   0%|          | 0/25 [00:00<?, ?it/s]
2024-11-15T15:16:12.038298 - !!! Exception during processing !!! Boolean value of Tensor with more than one value is ambiguous
2024-11-15T15:16:12.058483 - Traceback (most recent call last):
File "/workspace/ComfyUI/execution.py", line 323, in execute
output_data, output_ui, has_subgraph = get_output_data(obj, input_data_all, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/execution.py", line 198, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/execution.py", line 169, in _map_node_over_list
process_inputs(input_dict, i)
File "/workspace/ComfyUI/execution.py", line 158, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy_extras/nodes_custom_sampler.py", line 633, in sample
samples = guider.sample(noise.generate_noise(latent), latent_image, sampler, sigmas, denoise_mask=noise_mask, callback=callback, disable_pbar=disable_pbar, seed=noise.seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 740, in sample
output = self.inner_sample(noise, latent_image, device, sampler, sigmas, denoise_mask, callback, disable_pbar, seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 719, in inner_sample
samples = sampler.sample(self, sigmas, extra_args, callback, noise, latent_image, denoise_mask, disable_pbar)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 624, in sample
samples = self.sampler_function(model_k, noise, sigmas, extra_args=extra_args, callback=k_callback, disable=disable_pbar, **self.extra_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/k_diffusion/sampling.py", line 1058, in sample_deis
denoised = model(x_cur, t_cur * s_in, **extra_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 299, in __call__
out = self.inner_model(x, sigma, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 706, in __call__
return self.predict_noise(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 709, in predict_noise
return sampling_function(self.inner_model, x, timestep, self.conds.get("negative", None), self.conds.get("positive", None), self.cfg, model_options=model_options, seed=seed)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 279, in sampling_function
out = calc_cond_batch(model, conds, x, timestep, model_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/samplers.py", line 228, in calc_cond_batch
output = model.apply_model(input_x, timestep_, **c).chunk(batch_chunks)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/model_base.py", line 144, in apply_model
model_output = self.diffusion_model(xc, t, context=context, control=control, transformer_options=transformer_options, **extra_conds).float()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1553, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/dist-packages/torch/nn/modules/module.py", line 1562, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/comfy/ldm/flux/model.py", line 181, in forward
out = self.forward_orig(img, img_ids, context, txt_ids, timestep, y, guidance, control)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspace/ComfyUI/custom_nodes/ComfyUI-PuLID-Flux-Enhanced/pulidflux.py", line 113, in forward_orig
if node_data['sigma_start'] >= timesteps >= node_data['sigma_end']:
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
2024-11-15T15:16:12.062181 - Prompt executed in 170.11 seconds
```
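The failing check `node_data['sigma_start'] >= timesteps >= node_data['sigma_end']` chains two element-wise tensor comparisons, and Python's comparison chaining calls `bool()` on the first result, which raises for multi-element tensors. A torch-free stand-in reproduces the failure and shows one common fix (the `MiniTensor` class is purely illustrative, mimicking `torch.Tensor`'s comparison semantics; the real code uses torch):

```python
class MiniTensor:
    """Illustrative stand-in for torch.Tensor: >= / <= are element-wise,
    and bool() of a multi-element result raises, exactly like torch."""

    def __init__(self, vals):
        self.vals = list(vals)

    def __ge__(self, other):
        return MiniTensor([v >= other for v in self.vals])

    def __le__(self, other):
        return MiniTensor([v <= other for v in self.vals])

    def __bool__(self):
        if len(self.vals) != 1:
            raise RuntimeError(
                "Boolean value of Tensor with more than one value is ambiguous")
        return bool(self.vals[0])

    def all(self):
        return all(self.vals)


timesteps = MiniTensor([0.5, 0.7])  # batched sigmas -> more than one value
sigma_start, sigma_end = 1.0, 0.0

# `a >= t >= b` desugars to `(a >= t) and (t >= b)`; the `and` forces
# bool() on the element-wise comparison result, raising RuntimeError.
try:
    if sigma_start >= timesteps >= sigma_end:
        pass
except RuntimeError as exc:
    print(exc)

# One common fix: reduce each element-wise comparison explicitly.
in_range = (sigma_start >= timesteps).all() and (timesteps >= sigma_end).all()
print(in_range)
```

With real torch, the equivalent fix would be something like `((sigma_start >= timesteps) & (timesteps >= sigma_end)).all()`, or comparing against a single scalar sigma, depending on what the node intends.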
## Attached Workflow
Please make sure that workflow does not contain any sensitive information such as API keys or passwords.
```
Workflow too large. Please manually upload the workflow from local file system.
```
## Additional Context
```
(Please add any additional context or steps to reproduce the error here)
```
### Other
_No response_ | null | https://github.com/comfyanonymous/ComfyUI/pull/27 | null | {'base_commit': '3cd7d84b53724a97c1436f70b6da6975e3d93484', 'files': [{'path': 'webshit/index.html', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [274, 275, 276, 277, 278, 279, 287, 288, 289, 290, 291, 292, 293, 294, 295, 296, 297, 299, 301, 302, 303, 304, 305, 307, 308, 311, 312, 313, 315, 316, 318, 319, 321, 322, 325]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"webshit/index.html"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
comfyanonymous | ComfyUI | f7695b5f9e007136da72bd3e79d601e2814a3890 | https://github.com/comfyanonymous/ComfyUI/issues/5890 | Feature | Support wildcard type "*" in ComfyUI core | ### Feature Idea
Many custom nodes currently hack string comparison to achieve a wildcard type ("*"). This implementation is very hacky and hard to debug. We should properly support wildcard types in ComfyUI core.
### Existing Solutions
- https://github.com/pythongosssss/ComfyUI-Custom-Scripts/blob/d6657cc1f04539dbeea38d7bf6d73bc025004fa4/py/repeater.py
- https://github.com/FredBill1/comfyui-fb-utils/blob/main/core/types.py
### Other
_No response_ | null | https://github.com/comfyanonymous/ComfyUI/pull/5900 | null | {'base_commit': 'f7695b5f9e007136da72bd3e79d601e2814a3890', 'files': [{'path': 'execution.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}, "(None, 'validate_inputs', 531)": {'mod': [592, 593]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"execution.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
comfyanonymous | ComfyUI | f81dbe26e2e363c28ad043db67b59c11bb33f446 | https://github.com/comfyanonymous/ComfyUI/issues/2671 | Feature Request: Support Differential Diffusion for inpainting. | This is a nice alternative to standard inpainting: it allows the mask to be a gradient that controls edit strength on top of denoising.
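In rough terms, differential diffusion turns the binary inpainting mask into a per-pixel strength map and re-derives a binary mask at every denoising step, so stronger pixels keep being edited longer. A small sketch of that gating (my simplification; `differential_mask` and its thresholding convention are illustrative, not the paper's exact formulation):

```python
import numpy as np

def differential_mask(strength_map, progress):
    """Per-pixel gating in the spirit of differential diffusion (sketch):
    at denoising progress `progress` in [0, 1], only pixels whose
    change-strength is at least `progress` are still being edited;
    weaker pixels have already been frozen to the original content."""
    return (strength_map >= progress).astype(np.float32)

# A gradient mask: values near 0 barely change, values near 1 change fully.
strength = np.array([[0.0, 0.5],
                     [0.8, 1.0]])
print(differential_mask(strength, 0.6))
```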
https://github.com/exx8/differential-diffusion | null | https://github.com/comfyanonymous/ComfyUI/pull/2876 | null | {'base_commit': 'f81dbe26e2e363c28ad043db67b59c11bb33f446', 'files': [{'path': 'comfy/samplers.py', 'status': 'modified', 'Loc': {"('KSamplerX0Inpaint', 'forward', 277)": {'add': [278]}}}, {'path': 'nodes.py', 'status': 'modified', 'Loc': {"(None, 'init_custom_nodes', 1936)": {'add': [1963]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"nodes.py",
"comfy/samplers.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
ageitgey | face_recognition | fe421d4acd76e8a19098e942b7bd9c3bbef6ebc4 | https://github.com/ageitgey/face_recognition/issues/242 | imread() got an unexpected keyword argument 'mode' | * face_recognition version: 1.0.0
* Python version: 2.7
* Operating System: macOS El Capitan 10.11.6
### Description
After installing face_recognition, I tried to run examples/facerec_from_webcam_faster.py, but it shows the following error:
Traceback (most recent call last):
File "/Users/johnwang/workspace/PycharmProjects/Demo1/Face_Recognizer.py", line 18, in <module>
obama_image = face_recognition.load_image_file("obama.jpg")
File "/Library/Python/2.7/site-packages/face_recognition/api.py", line 81, in load_image_file
return scipy.misc.imread(file, mode=mode)
TypeError: imread() got an unexpected keyword argument 'mode'
I checked the scipy version and tried to upgrade; the scipy I have installed is already 1.0.0:
johns-MacBook-Pro:kaggle johnwang$ pip install --upgrade scipy
Requirement already up-to-date: scipy in /Library/Python/2.7/site-packages
Requirement already up-to-date: numpy>=1.8.2 in /Library/Python/2.7/site-packages (from scipy)
could you help on this problem? thanks in advance.
| null | https://github.com/ageitgey/face_recognition/pull/383 | null | {'base_commit': 'fe421d4acd76e8a19098e942b7bd9c3bbef6ebc4', 'files': [{'path': 'docs/conf.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [25]}}}, {'path': 'face_recognition/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}}}, {'path': 'face_recognition/api.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}, "(None, 'load_image_file', 76)": {'mod': [84]}}}, {'path': 'face_recognition/cli.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [6, 7]}, "(None, 'test_image', 42)": {'mod': [46, 47, 48, 49, 50]}}}, {'path': 'setup.cfg', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [17, 18, 28]}}}, {'path': 'tests/test_face_recognition.py', 'status': 'modified', 'Loc': {"('Test_face_recognition', None, 21)": {'add': [248]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"face_recognition/api.py",
"face_recognition/__init__.py",
"docs/conf.py",
"setup.py",
"setup.cfg",
"face_recognition/cli.py"
],
"doc": [],
"test": [
"tests/test_face_recognition.py"
],
"config": [],
"asset": []
} | 1 | |
ageitgey | face_recognition | 8322e7c00b7da9cbde8216c01d42330f03c5dcb9 | https://github.com/ageitgey/face_recognition/issues/59 | PIL/Image.py - ValueError: height and width must be > 0 | * face_recognition version: latest
* Python version: import dlib works for Python 2 and 3
* Operating System: Ubuntu 16.04.2 LTS
### Description
known_people directory has three images of each of four different people
pic1.jpg has 10 unidentified people in it, 2 of which are in known_people
pic2.jpg has 4 unidentified people in it, 1 of which is in known_people
### What I Did
```
Paste the command(s) you ran and the output.
If there was a crash, please include the traceback here.
```
gpu@gpu:~$ face_recognition known_people pic1.jpg
Traceback (most recent call last):
File "/usr/local/bin/face_recognition", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py", line 66, in main
test_image(image_to_check, known_names, known_face_encodings)
File "/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py", line 40, in test_image
unknown_image = scipy.misc.imresize(unknown_image, scale_factor)
File "/usr/local/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 490, in imresize
imnew = im.resize(size, resample=func[interp])
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 1645, in resize
return self._new(self.im.resize(size, resample))
ValueError: height and width must be > 0
gpu@gpu:~$ face_recognition known_people pic2.jpg
Traceback (most recent call last):
File "/usr/local/bin/face_recognition", line 11, in <module>
sys.exit(main())
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/dist-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py", line 66, in main
test_image(image_to_check, known_names, known_face_encodings)
File "/usr/local/lib/python2.7/dist-packages/face_recognition/cli.py", line 40, in test_image
unknown_image = scipy.misc.imresize(unknown_image, scale_factor)
File "/usr/local/lib/python2.7/dist-packages/scipy/misc/pilutil.py", line 490, in imresize
imnew = im.resize(size, resample=func[interp])
File "/usr/local/lib/python2.7/dist-packages/PIL/Image.py", line 1645, in resize
return self._new(self.im.resize(size, resample))
ValueError: height and width must be > 0
| null | https://github.com/ageitgey/face_recognition/pull/65 | null | {'base_commit': '8322e7c00b7da9cbde8216c01d42330f03c5dcb9', 'files': [{'path': 'face_recognition/cli.py', 'status': 'modified', 'Loc': {"(None, 'test_image', 32)": {'mod': [37]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"face_recognition/cli.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
PaddlePaddle | PaddleOCR | 2062b5097ce6800a6dc23fcc1648e128a27d6353 | https://github.com/PaddlePaddle/PaddleOCR/issues/10223 | good first issue
status/close | 🏅️ PaddlePaddle Suites Happy Open Source Regular Competition | ## About the Activity
The PaddlePaddle Suites Happy Open Source Regular Competition aims to let many developers take part in building the major CV/NLP suites (it is also an upgraded version of our original issue-tackling activity), including but not limited to adding basic features, reproducing papers, and answering issues. Any activity that helps community feedback flow and problems get solved is warmly welcomed. Let's grow together into important contributors to the PaddlePaddle CV/NLP suites. 🎉🎉
In this competition we combine two mutually reinforcing formats: technical seminars and task releases. Any developer who wants to contribute to the community (new code, issue answers, etc.) and is interested in building up knowledge in segmentation and OCR (we will gradually open more directions, including image detection, deployment, image classification, 3D, and natural language processing) is welcome to join 😊. Throughout the process, **keeping everyone continuously accumulating knowledge across the major vision directions is our unchanging goal** 🔥.
## Technical Seminars
To help everyone progressively understand, advise on, and develop open-source projects in the PaddlePaddle model ecosystem, we run technical seminars. Participating developers can join weekly sessions hosted by PaddlePaddle R&D engineers, covering topics including but not limited to:
1. Suite code-structure walkthroughs — read the code.
2. Algorithm surveys for the OCR and segmentation directions.
3. Interpretation of cutting-edge OCR and segmentation papers.
4. Discussion of how important proposed features are — let your input drive the suites' development.
## Activity Value
The knowledge gained in the seminars helps you take on our coding and issue-answering tasks; the task-completion leaderboard below is updated daily, and we look forward to your participation. Contributors who complete tasks can receive:
1. Technical growth: learn about new industry trends and directions and raise your own technical level;
2. Honors and rewards:
a. Become an important contributor to highly influential vision suites.
b. Receive open-source contribution certificates, community exposure, award certificates and badges, etc.;
c. Share in Happy Open Source prizes, including a PS5, AirPods, and more.
3. Outstanding open-source contributors can receive internal referrals for internships on the PaddlePaddle model-suite team;
## Task Leaderboard (issue answering, code development)
| Developer GitHub ID | Issues answered | PRs produced while answering issues (🌟) | Assigned tasks completed (:dart:) |
| --- | --- | --- | --- |
| 冲呀呀呀-[livingbody](https://github.com/livingbody) | 41 | 🌟 | :dart: :dart:|
| ToddBear | 11 | | :dart: :dart: |
| 强盛大队-[MINGtoMING](https://github.com/MINGtoMING) | | | :dart: :dart: |
| 曲项向天歌-[Asthestarsfalll](https://github.com/Asthestarsfalll)| 69 | 🌟 🌟 🌟 🌟 🌟 🌟 | :dart: |
| 德布罗意波-[marshall-dteach](https://github.com/marshall-dteach)| 3 | | :dart: |
| flytocc | | | :dart: |
| [Liyulingyue](https://github.com/Liyulingyue) | 2 | 🌟 🌟 |
| 冲锋小队-[Gmgge](https://github.com/Gmgge)| 7 | 🌟 | | |
| 风清扬-[WilliamQf-AI](https://github.com/WilliamQf-AI) | 6 | 🌟 | |
| GreatX-[GreatV](https://github.com/GreatV)| 4 | 🌟 | |
| [kerneltravel](https://github.com/kerneltravel) | 1 | 🌟 | |
| [xu-peng-7](https://github.com/xu-peng-7) | 1 | 🌟 | |
| 明月心-[raoyutian](https://github.com/raoyutian)| 8 | | |
| [bltcn](https://github.com/bltcn) | 1 | | |
## Task List
#### 1. Assigned Tasks (continuously updated):
Assigned tasks are requirements we collected in https://github.com/PaddlePaddle/PaddleOCR/issues/10334 and confirmed as important through discussion at the technical seminars. Developers interested in these requirements are welcome to join their development ✌️✌️. During development you can take part in task decomposition, code writing, and more, and PaddlePaddle R&D engineers will work with you throughout to resolve any problems you hit. What are you waiting for — come and join! 🎉🎉
* Task workflow:
1. Sign up on this issue page.
2. Add the PaddlePaddle suite R&D WeChat account transy-k and join the CV suite construction group; raise any problem you meet while completing a task there, and an R&D engineer on the model-suite team will answer.
3. After finishing a task, reply on the task's tracking-issue page; once R&D acceptance passes, the task counts as completed and the leaderboard is updated the same day.
* Completion criteria: finish as many tasks as possible. Progress is updated daily on the overall leaderboard (issue answering, code development); the number of assigned tasks completed is marked with :dart:.
* Task list
#### 2023 Q4 Tasks
| Task name</br>(requester) | Task description | tracking issue | mentor | Sign-up |
| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| MedicalSeg: add sliding-window inference (@tangshiyu) | 3D medical imaging lacks sliding-window inference, which can further improve the accuracy of any model | [PaddleSeg#3536](https://github.com/PaddlePaddle/PaddleSeg/issues/3536) | @shiyutang | |
| ~~Add an early-stop feature (@tangshiyu)~~ | ~~Early stopping is a regularization tool that can be used during model development and optimization; add it to PaddleSeg as a new feature~~ | [PaddleSeg#3537](https://github.com/PaddlePaddle/PaddleSeg/issues/3537) | @shiyutang | @ooooo-create (completed) |
| Add class activation maps (@tangshiyu) | Activation-map visualization helps explain the decision process of a deep learning model. By observing which regions the model attends to, one can understand how the model makes classification decisions from the features of different regions; this is a very meaningful and important feature | [PaddleSeg#3538](https://github.com/PaddlePaddle/PaddleSeg/issues/3538) | @shiyutang | |
| Add visualization of training, inference, and label images (@Wst-sd) | PaddlePaddle ships the powerful training visualization tool VisualDL for recording and monitoring training. Visualization of training, inference, and label images can be added each time the model is saved, giving a more intuitive view of training progress | [PaddleSeg#3545](https://github.com/PaddlePaddle/PaddleSeg/issues/3545) | @shiyutang | |
| CAT-Seg (CVPR'2023) model reproduction (@tangshiyu) | CAT-Seg is a state-of-the-art open-vocabulary semantic segmentation model. It proposes a cost-aggregation method that applies CLIP representations to pixel-level segmentation and reaches open-set segmentation SOTA on multiple datasets | [PaddleSeg#3535](https://github.com/PaddlePaddle/PaddleSeg/issues/3535) | @shiyutang | |
| VPD model + downstream tasks (visual perception, image segmentation, depth estimation) (@tangshiyu) | VPD is an image-text pre-trained model built on diffusion models. It can be widely applied to downstream tasks such as visual perception, image segmentation, and depth estimation, with good results on all of them. VPD can be integrated into PaddleSeg and applied to downstream tasks | [PaddleSeg#3540](https://github.com/PaddlePaddle/PaddleSeg/issues/3540) | @shiyutang | |
| Add the image-text dialogue model X-GPT (@tangshiyu) | X-Decoder integrates multiple image-understanding tasks; combined with GPT and SD generative models it can realize an all-in-one image-text conversational agent | [PaddleSeg#3541](https://github.com/PaddlePaddle/PaddleSeg/issues/3541) | @shiyutang | |
| Validate and improve the zero-shot segmentation accuracy of SAM+CLIP for semantic segmentation (@tangshiyu) | Vision tasks such as semantic segmentation generalize poorly: models must be retrained on every new dataset. Large models, by linking images and text, have greatly improved generalization, but [recent work](https://paperswithcode.com/paper/learning-mask-aware-clip-representations-for) on zero-shot shows that fully zero-shot segmentation accuracy remains low. We therefore borrow CLIP's definition of zero-shot, i.e. unseen images rather than unseen categories, to examine the segmentation performance of the CLIP+SAM model (a definition that is also of great practical value), and further optimize the baseline using the ideas of the [same paper](https://paperswithcode.com/paper/learning-mask-aware-clip-representations-for). This will validate and improve the generalization of semantic segmentation models on unseen data | [PaddleSeg#3542](https://github.com/PaddlePaddle/PaddleSeg/issues/3542) | @shiyutang | |
| 【Bug Fix】HumanSeg memory leak (@enemy1205) | When using PaddleSeg for [portrait segmentation](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8/contrib/PP-HumanSeg) inference on large batches of data, memory is not released sufficiently and accumulates, eventually triggering the Linux OOM mechanism and getting the process killed. | [PaddleSeg#3543](https://github.com/PaddlePaddle/PaddleSeg/issues/3543) | @shiyutang | |
| 【Bug Fix】MODNet inference problem (@munibkhanali) | When using MODNet for image matting, converting it to a Paddle Lite-compatible model raises an error; see ([#3477](https://github.com/PaddlePaddle/PaddleSeg/issues/3477)) | [PaddleSeg#3544](https://github.com/PaddlePaddle/PaddleSeg/issues/3544) | @shiyutang | |
| ~~Add documentation for the SATRN recognition model (@tangshiyu)~~ | The newly added SATRN recognition model lacks documentation; this is a good task for contributors with little open-source experience to learn the PR submission process and get familiar with the OCR docs | [PaddleOCR#11131](https://github.com/PaddlePaddle/PaddleOCR/issues/11131) | @shiyutang | @wkml |
| Add TIPC for the SATRN recognition model (@tangshiyu) | The newly added SATRN model lacks TIPC; completing TIPC is a good way to learn the automated train-and-inference validation pipeline | [PaddleOCR#11133](https://github.com/PaddlePaddle/PaddleOCR/issues/11133) | @shiyutang | |
| Add multi-GPU evaluation (@flytocc) | PaddleDetection currently supports only single-GPU evaluation; multi-GPU evaluation support is desired | [PaddleDet#8682](https://github.com/PaddlePaddle/PaddleDetection/issues/8682) | @shiyutang | @MINGtoMING |
| Add a switch for periodic evaluation during training to PaddleOCR (@tangshiyu) | Add a switch for periodic evaluation during training and an eval_epoch_step parameter to PaddleOCR. Unlike the other PaddleCV toolkits (PaddleSeg, PaddleDetection, PaddleClas, Paddle3D, etc.), PaddleOCR does not support this, which causes problems including but not limited to: users sometimes only want to train for a fixed number of epochs without evaluating accuracy during training (which adds time overhead), and PaddleOCR currently cannot satisfy this gracefully; the only workaround is to set a very large eval_batch_step. After switching datasets, the dataset size changes and users usually also have to adjust eval_batch_step to keep a sensible evaluation frequency. PaddleOCR implements an epoch-based trainer and its configs set epoch_num rather than num_iters, yet eval_batch_step is controlled at iteration granularity, which is stylistically inconsistent. | [PaddleOCR#11132](https://github.com/PaddlePaddle/PaddleOCR/issues/11132) | @shiyutang | |
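The requested epoch-granular control can be pictured as one extra key next to the existing iteration-granular one. This is a hypothetical config sketch only; the `eval_epoch_step` key is the proposal from this task, not an existing PaddleOCR option:

```yaml
Global:
  epoch_num: 100
  # Existing, iteration-granular: start evaluating at iter 0, then every 2000 iters
  eval_batch_step: [0, 2000]
  # Proposed, epoch-granular: evaluate once every 2 epochs (name taken from the task text)
  eval_epoch_step: 2
```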
#### 2023 Q3 tasks
| Task name</br>(requester) | Task description | Tracking issue | Mentor | Sign-up |
| ------------------------------------------------------------ | ------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ | ------------------------------------------------------------ |
| ~~Return per-character coordinates from text recognition (@EasyIsAllYouNeed @WilliamQf-AI, completed)~~ | After text recognition, additionally return the position coordinates of each character; useful in many scenarios such as document comparison and contract-tampering detection. | [PaddleOCR#10377](https://github.com/PaddlePaddle/PaddleOCR/issues/10377) | @shiyutang | @ToddBear #10515 |
| ~~Toolkit consistency plan **task updated to two subtasks** (@Bobholamovic)~~ | The major CV toolkits are currently inconsistent in dependencies, model-save paths, and more, making a unified environment and knowledge transfer impossible and degrading the user experience. This task tackles that problem; it is not difficult and is a very good starter task | [PaddleOCR#10380](https://github.com/PaddlePaddle/PaddleOCR/issues/10380) | @shiyutang @Bobholamovic | @livingbody |
| ~~[Paper reproduction] MobileSAM, an accelerated Segment Anything (@[qiaoyu1002](https://github.com/qiaoyu1002)) (completed)~~ | Following the issue https://github.com/PaddlePaddle/PaddleSeg/issues/3346 opened by the original author, reproduce the [MobileSAM](https://arxiv.org/pdf/2306.14289.pdf) paper. The model is an accelerated version of the popular SAM and greatly improves the SAM experience; it already has 2.9k stars, and the model and code are open source, so only forward alignment is needed | [PaddleOCR#10451](https://github.com/PaddlePaddle/PaddleOCR/issues/10451) | @shiyutang | @Asthestarsfalll [PaddleSeg#3349](https://github.com/PaddlePaddle/PaddleSeg/pull/3349) |
| ~~[Paper reproduction] OCR recognition model [Parseq](https://arxiv.org/abs/2207.06966) (@printfxs) (completed)~~ | The model combines visual and semantic information, improving both accuracy and speed, and has a further edge over the state-of-the-art SVTR model | [PaddleOCR#10452](https://github.com/PaddlePaddle/PaddleOCR/issues/10452) | @shiyutang | @ToddBear |
| ~~[Paper reproduction] Detection strategy: reproduce the SQR enhancement on top of PPDET Deformable DETR (@lyuwenyu)~~ | Add the state-of-the-art SQR strategy to PaddleDetection; it can be applied to multiple models | [PaddleDetection#8498](https://github.com/PaddlePaddle/PaddleDetection/issues/8498) | @shiyutang @juncaipeng | @flytocc |
| ~~[Paper reproduction] Classification: multi-label classification with ML-Decoder (@cuicheng01 @zhangyubo0722) (completed)~~ | The scalable, general classification head proposed in the paper performs well on multi-label, zero-shot, and single-label classification tasks. Completing this task extends PaddleClas's multi-label classification capabilities and has many application scenarios. The authors validated performance on different datasets and tasks, fully demonstrating the performance and generality of the ML-Decoder head. | [PaddleClas#2896](https://github.com/PaddlePaddle/PaddleClas/issues/2896) | @cuicheng01 @shiyutang | @MINGtoMING |
| [Model compression rollout] Add model compression to the six major toolkits (@shiyutang) | Model-compression support currently varies widely across the toolkits. As the step before deployment, compression can significantly improve a model's energy consumption, speed, and size with little or no loss of accuracy. To roll compression out across the toolkits, we propose adding it to each of them based on PaddleSlim's ACT. | [PaddleOCR#10657](https://github.com/PaddlePaddle/PaddleOCR/issues/10657) | @shiyutang | Sign up on the issue page |
| ~~Add multi-label semantic segmentation to PaddleSeg (@Wulx2050)~~ | Multi-label segmentation is a branch of segmentation commonly used in medical imaging; it can be implemented by modifying the segmentation head and the loss function. | [PaddleSeg#3456](https://github.com/PaddlePaddle/PaddleSeg/issues/3456) | @shiyutang | @MINGtoMING |
#### 2. Good first issue
* Task description: these are usually issues about unfamiliar documentation, runtime errors, bug fixes, and so on. Completing such an issue/PR is a good first step toward contributing code.
* Task workflow:
1. Sign up on this issue page.
2. Add the PaddlePaddle toolkit R&D engineers on WeChat: transy-k to join the CV toolkit construction group. Any problems encountered while completing a task can be reported there, and an RD from the model toolkit team will answer them.
3. Reply to the issue; once the answer is judged correct, reply on this page that it is complete. After RD verification it counts as one completed item, and the leaderboard is updated the same day.
* Completion criteria: answer as many issues as possible. Progress is updated daily on the overall task leaderboard (issue answering, code development); answers that additionally lead to a merged PR earn an extra star 🌟.
* Task list:
1. PaddleOCR repo: [good first issue](https://github.com/PaddlePaddle/PaddleOCR/issues)
2. PaddleSeg repo: [good first issue](https://github.com/PaddlePaddle/PaddleSeg/issues?q=is%3Aissue+is%3Aopen+label%3AGoodFirstIssue)
## Sign-up template
Team name: XXX
Team members' WeChat nicknames: XX
Feature description: (optional) describe the feature you want to implement
[Add when submitting] Issue/PR link: GitHub link
## 💡 We welcome your ideas
* We welcome your ideas on building out the toolkits, whether new feature requests for the toolkits or suggestions about our direction. New requirements or problems can be raised in an issue. Your requirements and suggestions may become tasks we publish later, and everyone can work together to realize them.
| null | https://github.com/PaddlePaddle/PaddleOCR/pull/3261 | null | {'base_commit': '2062b5097ce6800a6dc23fcc1648e128a27d6353', 'files': [{'path': 'PPOCRLabel/PPOCRLabel.py', 'status': 'modified', 'Loc': {"('MainWindow', '__init__', 95)": {'add': [400], 'mod': [568]}, "('MainWindow', None, 92)": {'add': [762]}}}, {'path': 'PPOCRLabel/libs/utils.py', 'status': 'modified', 'Loc': {"(None, 'stepsInfo', 162)": {'mod': [190]}}}, {'path': 'PPOCRLabel/resources/strings/strings-zh-CN.properties', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [91]}}}, {'path': 'PPOCRLabel/resources/strings/strings.properties', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [91]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"PPOCRLabel/libs/utils.py",
"PPOCRLabel/PPOCRLabel.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"PPOCRLabel/resources/strings/strings-zh-CN.properties",
"PPOCRLabel/resources/strings/strings.properties"
]
} | 1 |
AntonOsika | gpt-engineer | 19446faaa12743f0a2f729a7beab0e561626f530 | https://github.com/AntonOsika/gpt-engineer/issues/841 | bug
triage | ValueError: Could not parse following text as code edit: | ## Expected Behavior
Improve the code
## Current Behavior
Error gets thrown
## Failure Information
```
Traceback (most recent call last):
File "/home/riccardo/.local/bin/gpt-engineer", line 8, in <module>
sys.exit(app())
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/cli/main.py", line 169, in main
messages = step(ai, dbs)
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/steps.py", line 588, in improve_existing_code
overwrite_files_with_edits(messages[-1].content.strip(), dbs)
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py", line 219, in overwrite_files_with_edits
edits = parse_edits(chat)
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py", line 268, in parse_edits
return parse_all_edits(llm_response)
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py", line 255, in parse_all_edits
edits.append(parse_one_edit(current_edit))
File "/home/riccardo/.local/lib/python3.10/site-packages/gpt_engineer/core/chat_to_files.py", line 240, in parse_one_edit
raise ValueError(f"Could not parse following text as code edit: \n{text}")
```
### Steps to Reproduce
I'm using this prompt:
> Improve Code for Readability and Reusability:
>
> Refactor complex functions into smaller, more manageable pieces.
> Use meaningful variable and function names that clearly indicate their purpose.
> Follow a consistent coding style and adhere to best practices outlined in the project's style guide.
> Implement design patterns where applicable to promote code reusability.
> Implement TODOs Where Appropriate:
>
> Review the codebase for any // TODO: comments and prioritize their completion based on the project's goals.
> Assess the impact of each TODO on the current codebase and potential future developments.
> Document the reasoning behind the resolution of TODOs for future reference.
> Add Comments Where Appropriate:
>
> Provide clear and concise comments for complex code blocks to explain the logic and its purpose.
> Update or remove outdated comments that no longer reflect the current state of the code.
> Use comments to outline the steps of complex algorithms or workflows within the code.
>
> Optimize Performance:
>
> Identify bottlenecks and optimize critical sections of the code for better performance.
> Consider the time and space complexity of algorithms and refactor if more efficient solutions exist.
> Utilize profiling tools to measure performance improvements.
> Enhance Security:
>
> Review the code for potential security vulnerabilities and apply best practices to mitigate risks.
> Ensure that all sensitive data is properly encrypted and that secure coding principles are followed.
> Stay updated with the latest security advisories and apply patches or updates as necessary.
>
> Implement Unit Tests and Integration Tests:
>
> Write unit tests for new features and bug fixes to validate individual components.
> Create integration tests to ensure that different parts of the application work together as expected.
> Strive for a high level of test coverage to catch potential issues early.
Also, I got charged :'(

| null | https://github.com/AntonOsika/gpt-engineer/pull/1005 | null | {'base_commit': '19446faaa12743f0a2f729a7beab0e561626f530', 'files': [{'path': 'gpt_engineer/applications/cli/file_selector.py', 'status': 'modified', 'Loc': {"('FileSelector', 'get_current_files', 327)": {'add': [354]}}}, {'path': 'gpt_engineer/core/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39], 'mod': [2, 4, 5, 6, 7, 9, 10, 11, 12, 13, 15, 16, 17, 18, 19, 21, 22, 23, 24, 25, 26, 27, 28, 29, 35, 36, 38]}, "(None, 'chat_to_files_dict', 43)": {'mod': [45, 47, 48, 50, 51, 52, 53, 55, 56, 57, 58, 60, 66, 69, 72, 75, 78, 81, 84]}, "(None, 'overwrite_code_with_edits', 87)": {'mod': [87, 89, 91, 92, 94, 95, 96, 97, 98, 99, 101, 102, 105, 106, 107, 108, 109, 112]}, "(None, 'parse_edits', 112)": {'mod': [114, 116, 117, 119, 120, 121, 122, 124, 125, 126, 127, 129, 130, 131, 132, 133, 135, 136, 137, 138, 152, 153, 154, 156, 157, 158, 159, 160, 161, 162, 163, 164, 166, 167, 169]}, "(None, 'parse_one_edit', 135)": {'mod': [140, 141, 142, 143, 144, 145, 147, 148, 150]}, "(None, 'apply_edits', 172)": {'mod': [172, 174, 176, 177, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 190, 191, 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206]}}}, {'path': 'gpt_engineer/core/default/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [28, 29, 47, 48, 49, 50, 51, 59]}, "(None, 'incorrect_edit', 256)": {'mod': [256, 257, 258, 260, 261, 262, 263, 264, 265, 267, 268, 269, 270, 271, 272, 273, 274, 275, 276, 277, 278, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289]}, "(None, 'improve', 292)": {'mod': [306, 328, 332, 334, 335, 339, 341, 344, 345]}}}, {'path': 'gpt_engineer/core/files_dict.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}, "('FilesDict', 'to_chat', 54)": {'mod': [55, 56, 57, 82, 83, 84, 85, 86, 87]}, "('FilesDict', 'format_file_to_input', 55)": {'mod': [59, 60, 62, 63, 64, 65, 66, 67, 69, 70, 71, 72, 73, 
74, 75, 76, 77, 78, 79, 80]}}}, {'path': 'gpt_engineer/preprompts/improve', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 2, 4, 5, 6, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 23, 24, 25, 26, 27, 29, 30, 31, 32, 33, 34, 35, 36, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 50, 51, 52, 53, 54, 55, 56, 57, 58, 60, 62, 63, 65, 67, 69, 70, 71, 72, 73, 75, 77, 78, 79, 80]}}}, {'path': 'projects/example-improve/controller.py', 'status': 'modified', 'Loc': {"('Controller', 'handle_input', 9)": {'add': [13, 17], 'mod': [10, 11, 12, 15, 16]}}}, {'path': 'projects/example-improve/prompt', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'tests/applications/cli/test_main.py', 'status': 'modified', 'Loc': {"('TestMain', 'test_improve_existing_project', 67)": {'mod': [83, 84]}}}, {'path': 'tests/caching_ai.py', 'status': 'modified', 'Loc': {"('CachingAI', 'next', 31)": {'mod': [69, 71]}}}, {'path': 'tests/core/default/test_steps.py', 'status': 'modified', 'Loc': {"('TestImprove', 'test_improve_existing_code', 265)": {'mod': [270, 271, 272, 273, 274, 275, 276, 277]}}}, {'path': 'tests/core/test_chat_to_files.py', 'status': 'modified', 'Loc': {"(None, 'test_parse_with_additional_text', 146)": {'add': [170], 'mod': [159, 161, 162, 163, 164, 165, 166, 167, 168, 169, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181, 182, 183]}, '(None, None, None)': {'add': [217], 'mod': [1, 5, 6, 7, 8, 9, 10, 11, 14, 15, 16, 17, 18, 19]}, "(None, 'test_standard_input', 14)": {'mod': [22, 23, 24, 25, 27, 28, 29, 30, 31, 32, 35, 36, 37, 38, 41, 42, 43, 44, 45, 46, 48, 49, 50, 51, 52, 53, 54, 55, 58, 59, 60, 61, 62, 63, 64, 65, 68, 69, 70, 71, 72, 73, 75, 76, 77, 78, 79, 80, 81, 84, 85, 86, 88, 89, 90, 91, 92, 93, 96, 97, 98, 99, 100, 101, 102, 103, 104, 107, 108, 109, 110, 111, 112, 113, 114, 115, 118, 119, 120, 121, 122, 123, 124, 125, 126, 129, 130, 131, 132, 133, 134, 135, 137, 138, 140, 141, 142, 143, 146, 147, 148, 150, 151, 152, 153, 154, 155, 
156]}, "(None, 'test_apply_overwrite_existing_file', 186)": {'mod': [186, 187, 188, 189, 190, 191]}, "(None, 'test_apply_edit_new_file', 194)": {'mod': [194, 195, 196, 197, 198]}, "(None, 'test_apply_edit_no_match', 201)": {'mod': [201, 202, 203, 204, 205, 206]}, "(None, 'test_apply_edit_multiple_matches', 209)": {'mod': [209, 210, 211, 212, 213, 215]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"tests/caching_ai.py",
"gpt_engineer/applications/cli/file_selector.py",
"projects/example-improve/controller.py",
"gpt_engineer/core/files_dict.py",
"gpt_engineer/core/default/steps.py",
"gpt_engineer/core/chat_to_files.py"
],
"doc": [],
"test": [
"tests/core/test_chat_to_files.py",
"tests/core/default/test_steps.py",
"tests/applications/cli/test_main.py"
],
"config": [],
"asset": [
"gpt_engineer/preprompts/improve",
"projects/example-improve/prompt"
]
} | 1 |
AntonOsika | gpt-engineer | a248d8104eeb9deffc8c3819b376bfdcf6f8df83 | https://github.com/AntonOsika/gpt-engineer/issues/205 | good first issue | Run pytest in pre-commit | - Add requirement to pyproject.toml
- Setup `.pre-commit-config.yaml` config
- test that everything is working with `pre-commit run` and in github actions | null | https://github.com/AntonOsika/gpt-engineer/pull/210 | null | {'base_commit': 'a248d8104eeb9deffc8c3819b376bfdcf6f8df83', 'files': [{'path': '.github/workflows/pre-commit.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [13]}}}, {'path': '.pre-commit-config.yaml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5], 'mod': [12, 29, 30, 31, 32]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11], 'mod': [13, 15]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 3, 4], 'mod': [6, 7]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
".pre-commit-config.yaml",
".github/workflows/pre-commit.yaml",
"requirements.txt",
"pyproject.toml"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | b27461a871c972ef1c6f080b4608331bc7b01255 | https://github.com/AntonOsika/gpt-engineer/issues/476 | [Feature] Using a open-source LLM instead of Open AI | null | null | https://github.com/AntonOsika/gpt-engineer/pull/639 | null | {'base_commit': 'b27461a871c972ef1c6f080b4608331bc7b01255', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [83]}}}, {'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {"(None, 'create_chat_model', 342)": {'mod': [368, 370, 371, 372, 373, 374, 375, 376, 377, 383]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/ai.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | bf206a5a1abeaa2b274a799e96933869e02d4c0a | https://github.com/AntonOsika/gpt-engineer/issues/898 | bug | Incompatibility with Python 3.8 and 3.9: TypeError in file_store.py | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Expected Behavior
The project documentation states support for Python versions 3.8 - 3.11. I expect the software to run without syntax errors in these versions.
## Current Behavior
When attempting to run the project in Python 3.9, a TypeError occurs in `file_store.py` due to the use of the union operator `|` in type hints.
## Failure Information
The project uses a syntax feature (`str | Path`) that is only available in Python 3.10 and later, leading to incompatibility with Python 3.8 and 3.9.
### Steps to Reproduce
1. Set up the project in a Python 3.9 environment.
2. Follow the installation and setup instructions.
3. Attempt to run the project, leading to the TypeError in `file_store.py`.
### Failure Logs
```
Traceback (most recent call last):
File ".../Scripts/gpt-engineer", line 3, in <module>
from gpt_engineer.applications.cli.main import app
... (additional traceback)
File ".../file_store.py", line 8, in FileStore
def __init__(self, path: str | Path | None = None):
TypeError: unsupported operand type(s) for |: 'type' and 'type'
```
| null | https://github.com/AntonOsika/gpt-engineer/pull/909 | null | {'base_commit': 'bf206a5a1abeaa2b274a799e96933869e02d4c0a', 'files': [{'path': 'gpt_engineer/applications/cli/cli_agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "('CliAgent', 'improve', 125)": {'mod': [126]}}}, {'path': 'gpt_engineer/applications/cli/learning.py', 'status': 'modified', 'Loc': {"(None, 'human_review_input', 92)": {'mod': [92]}}}, {'path': 'gpt_engineer/core/base_execution_env.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "('BaseExecutionEnv', None, 7)": {'mod': [22]}}}, {'path': 'gpt_engineer/core/base_memory.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 4]}}}, {'path': 'gpt_engineer/core/default/disk_execution_env.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4]}, "('DiskExecutionEnv', None, 11)": {'mod': [23, 43]}}}, {'path': 'gpt_engineer/core/default/disk_memory.py', 'status': 'modified', 'Loc': {"('DiskMemory', None, 41)": {'mod': [250]}}}, {'path': 'gpt_engineer/core/default/file_store.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3]}, "('FileStore', None, 8)": {'mod': [9]}}}, {'path': 'gpt_engineer/core/default/simple_agent.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "('SimpleAgent', 'improve', 60)": {'mod': [64]}}}, {'path': 'gpt_engineer/core/files_dict.py', 'status': 'modified', 'Loc': {"('FilesDict', '__setitem__', 20)": {'mod': [34]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/core/default/file_store.py",
"gpt_engineer/applications/cli/learning.py",
"gpt_engineer/core/default/simple_agent.py",
"gpt_engineer/core/default/disk_execution_env.py",
"gpt_engineer/core/files_dict.py",
"gpt_engineer/core/base_execution_env.py",
"gpt_engineer/applications/cli/cli_agent.py",
"gpt_engineer/core/default/disk_memory.py",
"gpt_engineer/core/base_memory.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 7020fea81bef927fe4184e351be12aedf32e7545 | https://github.com/AntonOsika/gpt-engineer/issues/758 | bug
sweep | UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 436: character maps to <undefined> | ## Expected Behavior
gpt-engineer "path" -i command to work properly
## Current Behavior
Error after "Press enter to proceed with modifications."
### Steps to Reproduce
windows
python 3.9
### Failure Logs
```
Traceback (most recent call last):
File "C:\tools\Anaconda3\envs\gpteng\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\tools\Anaconda3\envs\gpteng\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\tools\Anaconda3\envs\gpteng\Scripts\gpt-engineer.exe\__main__.py", line 7, in <module>
sys.exit(app())
File "C:\tools\Anaconda3\envs\gpteng\lib\site-packages\gpt_engineer\main.py", line 96, in main
messages = step(ai, dbs)
File "C:\tools\Anaconda3\envs\gpteng\lib\site-packages\gpt_engineer\steps.py", line 360, in improve_existing_code
files_info = get_code_strings(dbs.input) # this only has file names not paths
File "C:\tools\Anaconda3\envs\gpteng\lib\site-packages\gpt_engineer\chat_to_files.py", line 113, in get_code_strings
file_data = file.read()
File "C:\tools\Anaconda3\envs\gpteng\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.errors,decoding_table)[0]
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 436: character maps to <undefined>
```
<details open>
<summary>Checklist</summary>
- [X] ``gpt_engineer/core/chat_to_files.py:get_code_strings`` ✅ Commit [`83c9784`](https://github.com/AntonOsika/gpt-engineer/commit/83c97847c89a1c4336f8c824a6b34aa54de17f33)
</details>
| null | https://github.com/AntonOsika/gpt-engineer/pull/801 | null | {'base_commit': '7020fea81bef927fe4184e351be12aedf32e7545', 'files': [{'path': 'gpt_engineer/core/chat_to_files.py', 'status': 'modified', 'Loc': {"(None, 'get_code_strings', 140)": {'mod': [179]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/core/chat_to_files.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | ebfa59e4f462b1503d9706d3282a6b9751b3dcd7 | https://github.com/AntonOsika/gpt-engineer/issues/754 | bug | the code fails after giving additional information at the questions. | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Expected Behavior
That I get some feedback, e.g. code that I can use.
## Current Behavior
Fails after the additional questions.
## Failure Information
Nothing more to clarify.
```
Traceback (most recent call last):
File "/Users/tom/Library/Python/3.9/bin/gpt-engineer", line 8, in <module>
sys.exit(app())
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/gpt_engineer/main.py", line 96, in main
messages = step(ai, dbs)
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/gpt_engineer/steps.py", line 192, in gen_clarified_code
messages = AI.deserialize_messages(dbs.logs[clarify.__name__])
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/gpt_engineer/ai.py", line 216, in deserialize_messages
return list(messages_from_dict(json.loads(jsondictstr))) # type: ignore
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/schema/messages.py", line 351, in messages_from_dict
return [_message_from_dict(m) for m in messages]
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/schema/messages.py", line 351, in <listcomp>
return [_message_from_dict(m) for m in messages]
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/schema/messages.py", line 331, in _message_from_dict
return AIMessage(**message["data"])
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/langchain/load/serializable.py", line 90, in __init__
super().__init__(**kwargs)
File "/Users/tom/Library/Python/3.9/lib/python/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AIMessage
is_chunk
unexpected value; permitted: False (type=value_error.const; given=True; permitted=(False,))
```
python --version
Python 3.11.6
chatgpt API, 3.5-turbo
A possibly related warning/issue I get is:
Users/tom/Library/Python/3.9/lib/python/site-packages/urllib3/__init__.py:34: NotOpenSSLWarning: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'LibreSSL 2.8.3'. See: https://github.com/urllib3/urllib3/issues/3020
### Steps to Reproduce
If possible, provide detailed steps for reproducing the issue.
1. I have a prompt file (no extension) in a folder
2. I run gpt-engineer folder
3. I get additional questions
4. After that, it fails every time. I tried 3 different folders with different prompts; I once skipped the questions, and other times I answered them all. It fails every time.
### Failure Logs
Any relevant log snippets or files here.
| null | https://github.com/AntonOsika/gpt-engineer/pull/769 | null | {'base_commit': 'ebfa59e4f462b1503d9706d3282a6b9751b3dcd7', 'files': [{'path': 'gpt_engineer/core/ai.py', 'status': 'modified', 'Loc': {"('AI', 'deserialize_messages', 329)": {'mod': [343]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/core/ai.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | dc24bb846464f953e8bb2dbcbcb6ad4faaaeff32 | https://github.com/AntonOsika/gpt-engineer/issues/786 | bug | gpt-engineer doesn't respect the COLLECT_LEARNINGS_OPT_OUT=true env variable | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Expected Behavior
When setting the environment variable COLLECT_LEARNINGS_OPT_OUT=true, no information should be transmitted back to the gpt-engineer developer.
## Current Behavior
Based on viewing the verbose execution output, it's clear that even with that environment variable set, information was transmitted back to the developer. On inspecting the consent methods, such as https://github.com/AntonOsika/gpt-engineer/blob/main/gpt_engineer/cli/learning.py#L172, it's clear that the environment variable is never referenced.
This is highly undesirable, considering that this is the mechanism for opting out of data collection described in the terms of use - https://github.com/AntonOsika/gpt-engineer/blob/main/TERMS_OF_USE.md.
## Failure Information
I've already transmitted too much information to the developer, and don't feel comfortable adding anything more.
| null | https://github.com/AntonOsika/gpt-engineer/pull/806 | null | {'base_commit': 'dc24bb846464f953e8bb2dbcbcb6ad4faaaeff32', 'files': [{'path': 'gpt_engineer/cli/learning.py', 'status': 'modified', 'Loc': {"(None, 'check_consent', 149)": {'add': [161], 'mod': [149, 157, 165, 168]}, "(None, 'human_review_input', 96)": {'mod': [106]}, "(None, 'collect_consent', 172)": {'mod': [172, 173, 174, 175, 177, 178, 179, 180, 181, 182, 183, 184, 186, 187, 188, 189, 190, 191, 194, 195, 196, 198, 199, 200, 201, 202, 203, 204, 205, 206, 207, 208, 209, 210, 211, 212, 213, 215, 216, 218]}}}, {'path': 'gpt_engineer/cli/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [40]}, "(None, 'main', 80)": {'mod': [174]}}}, {'path': 'tests/test_collect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11]}, "(None, 'test_collect_learnings', 15)": {'mod': [16]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/cli/learning.py",
"gpt_engineer/cli/main.py"
],
"doc": [],
"test": [
"tests/test_collect.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 2058edb3cfb8764cf642d73035af4bb6c783b7e5 | https://github.com/AntonOsika/gpt-engineer/issues/670 | enhancement
good first issue | Make improve flag less intrusive by moving over files like "all_output.txt" and "file_list" to the .gpteng folder | This is done by simply using the new DB in #665 and writing to it | null | https://github.com/AntonOsika/gpt-engineer/pull/720 | null | {'base_commit': '2058edb3cfb8764cf642d73035af4bb6c783b7e5', 'files': [{'path': 'gpt_engineer/db.py', 'status': 'modified', 'Loc': {"('DBs', None, 118)": {'add': [124]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'main', 27)": {'add': [78], 'mod': [66, 68]}}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {"(None, 'set_improve_filelist', 296)": {'mod': [298]}, "(None, 'assert_files_ready', 302)": {'mod': [306, 307]}, "(None, 'get_improve_prompt', 312)": {'mod': [327]}, "(None, 'improve_existing_code', 343)": {'mod': [349]}}}, {'path': 'tests/steps/test_archive.py', 'status': 'modified', 'Loc': {"(None, 'test_archive', 25)": {'mod': [27, 36]}}}, {'path': 'tests/test_collect.py', 'status': 'modified', 'Loc': {"(None, 'test_collect_learnings', 15)": {'mod': [22]}}}, {'path': 'tests/test_db.py', 'status': 'modified', 'Loc': {"(None, 'test_DBs_initialization', 21)": {'add': [36], 'mod': [22]}, "(None, 'test_DBs_dataclass_attributes', 99)": {'add': [113], 'mod': [100]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/db.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/steps/test_archive.py",
"tests/test_db.py",
"tests/test_collect.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | f84754d54ee311146c4f52b5e3ceb0fa8d0b731b | https://github.com/AntonOsika/gpt-engineer/issues/563 | It's only using python... | ## Expected Behavior
I've seen 3 or 4 issues here asking if gpt-engineer could use languages other than Python. The answer was always something like "yes, of course, it's ChatGPT writing the code, so you can use any language".
## Current Behavior
No matter what I do, it always uses Python, even if I explicitly forbid it to use Python and stress that in the clarifications.
why?


| null | https://github.com/AntonOsika/gpt-engineer/pull/568 | null | {'base_commit': 'f84754d54ee311146c4f52b5e3ceb0fa8d0b731b', 'files': [{'path': 'gpt_engineer/preprompts/philosophy', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 4, 5, 6, 7]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"gpt_engineer/preprompts/philosophy"
]
} | 1 | |
AntonOsika | gpt-engineer | e55f84041c522b03ce09c958deb9822095b3e84e | https://github.com/AntonOsika/gpt-engineer/issues/943 | documentation | Instructions for running it with local models is lacking. | ## Policy and info
- Maintainers will close issues that have been stale for 14 days if they contain relevant answers.
- Adding the label "sweep" will automatically turn the issue into a coded pull request. Works best for mechanical tasks. More info/syntax at: https://docs.sweep.dev/
## Description
Instructions:
Running the Example (https://gpt-engineer.readthedocs.io/en/latest/open_models.html#running-the-example)
Once the API is set up, you can find the host and the exposed TCP port by checking your Runpod dashboard.
Then, you can use the port and host to run the following example using WizardCoder-Python-34B hosted on Runpod:
OPENAI_API_BASE=http://<host>:<port>/v1 python -m gpt_engineer.cli.main benchmark/pomodoro_timer --steps benchmark TheBloke_WizardCoder-Python-34B-V1.0-GPTQ
What is this example? What does it do? What is gpt_engineer.cli.main?
How do I run the main command "gpte projects/my-new-project" after I have a local LLM running on localhost:8000?
## Suggestion
Please provide more step by step instructions.
| null | https://github.com/AntonOsika/gpt-engineer/pull/1082 | null | {'base_commit': '164730a5b933ec0ebc9003c72f60e58176ef0dc6', 'files': [{'path': 'docs/open_models.md', 'status': 'modified', 'Loc': {'(None, None, 17)': {'add': [17]}, '(None, None, 21)': {'add': [21]}, '(None, None, 4)': {'mod': [4]}, '(None, None, 9)': {'mod': [9]}, '(None, None, 12)': {'mod': [12]}, '(None, None, 14)': {'mod': [14]}, '(None, None, 16)': {'mod': [16]}, '(None, None, 19)': {'mod': [19]}}}, {'path': 'gpt_engineer/applications/cli/main.py', 'status': 'modified', 'Loc': {"(None, 'main', 247)": {'add': [474]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/applications/cli/main.py"
],
"doc": [
"docs/open_models.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 29e891c1a7bc6a0a46f8ce9d337a1b4bb82dcf85 | https://github.com/AntonOsika/gpt-engineer/issues/650 | enhancement
good first issue | Fix the "improve" prompt to make sure that it generates diffs, and parse and apply those diffs to the existing codebase | One way to do this is to write the prompt for gpt-engineer with `-i` flag to annotate each codeblock with one of:
1. `NEW CODE`
2. `REPLACING ONE FUNCTION`
If 1., the generated code can just be written to a new file (or appended to an existing file).
If it is replacing an existing function, we could make sure to find the name of the function that is being replaced using an AST parser (see how [here](https://chat.openai.com/share/71012377-7ebb-47f2-a8fc-7d1bfd4fabe2))
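For case 2., the function-name lookup the issue describes can be sketched with the standard `ast` module. This is a minimal illustration; the helper name and the example input are assumptions, not the project's actual implementation:

```python
import ast

def replaced_function_names(code_block: str) -> list[str]:
    """Return the names of top-level functions defined in a generated
    code block, so they can be matched against the existing file."""
    tree = ast.parse(code_block)
    return [
        node.name
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef))
    ]

# Hypothetical generated block that replaces a single function.
block = """
def to_files(chat, workspace):
    pass
"""
print(replaced_function_names(block))  # → ['to_files']
```

Matching these names against the AST of the existing file then tells us whether to overwrite a function in place or append new code.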
## Why this is necessary
As an example, I tried to use it on the project itself and got a codeblock that was just changing one of the function (so it should not be used to overwrite the entire file)
## How to do it
We can take inspiration from Aider, which generates diffs, or from Sweep in how they prompt for "<copy_lines>" and [parse the GPT-4 output here](https://github.com/sweepai/sweep/blob/e384c9fc3e0278257324c4ce57a888fa64f071b7/sweepai/utils/diff.py#L113)
Should be quite straightforward! | null | https://github.com/AntonOsika/gpt-engineer/pull/714 | null | {'base_commit': '29e891c1a7bc6a0a46f8ce9d337a1b4bb82dcf85', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [55, 56, 57, 58, 59, 62, 63, 64, 65]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 8e95858f3867faf1198c0631bd060172991bb523 | https://github.com/AntonOsika/gpt-engineer/issues/872 | enhancement
triage | Default launch command is too cumbersome | ## Policy and info
- good first issue
## Feature description
Currently, the `gpt-engineer` command has to be used to run the tool. Although this can be resolved with an alias, it would be nice to have a command such as `gpte` available by default.
Can refer https://clig.dev/#naming for more details.
## Motivation/Application
This feature will make the command much more user-friendly. Having to type a dash (`-`) in the `gpt-engineer` command is cumbersome.
| null | https://github.com/AntonOsika/gpt-engineer/pull/889 | null | {'base_commit': '8e95858f3867faf1198c0631bd060172991bb523', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [74], 'mod': [64, 65, 70, 71]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [62]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"README.md"
],
"test": [],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | b60185ac6a02c1366324221eb143c9e37a64f1e6 | https://github.com/AntonOsika/gpt-engineer/issues/718 | Separate `core` and `cli` into separate modules (directories) and only allow cli to import from core | The idea is to separate the core logic and CLI UX specific things. To make it easier to take decisions on what makes sense from UX perspective, and how the core building blocks should work.
Would look something like:
```
gpt_engineer
├── core
│ ├── ai.py
│ ├── domain.py
│ ├── chat_to_files.py
│ ├── steps.py
│ └── db.py
├── cli
│ ├── main.py
│ ├── file_selector.py
│ ├── learning.py
│ └── collect.py
├── api
│ └── main.py
└── preprompts
└── ...
```
One could use either:
- PyCharm "move" automagic functionality
- Or! gpt-engineer by adding new steps and configs, or somehow the existing -i flag | null | https://github.com/AntonOsika/gpt-engineer/pull/766 | null | {'base_commit': 'fb35323551c3404283fdb04297f961a05a587caf', 'files': [{'path': 'evals/evals_existing_code.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14, 15]}}}, {'path': 'evals/evals_new_code.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [14]}}}, {'path': 'gpt_engineer/api.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [4, 5]}}}, {'path': 'gpt_engineer/collect.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [5, 6, 7, 8]}}}, {'path': 'gpt_engineer/file_selector.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [10]}, "('TerminalFileSelector', None, 134)": {'add': [134]}, "('DisplayablePath', None, 16)": {'mod': [18]}, "('TerminalFileSelector', 'display', 143)": {'mod': [145]}, "('TerminalFileSelector', 'ask_for_selection', 173)": {'mod': [175, 178]}}}, {'path': 'gpt_engineer/learning.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [13, 14]}}}, {'path': 'gpt_engineer/main.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [11, 12, 13, 14, 15]}}}, {'path': 'gpt_engineer/ai.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0, 24, 26]}, "('AI', None, 41)": {'add': [41, 188]}, "(None, 'serialize_messages', 430)": {'add': [430]}}}, {'path': 'gpt_engineer/chat_to_files.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [7, 8]}}}, {'path': 'gpt_engineer/db.py', 'status': 'renamed', 'Loc': {'(None, None, None)': {'add': [0]}, "('DB', None, 10)": {'add': [10]}}}, {'path': 'gpt_engineer/domain.py', 'status': 'removed', 'Loc': {}}, {'path': 'gpt_engineer/steps.py', 'status': 'removed', 'Loc': {}}, {'path': 'scripts/rerun_edited_message_logs.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [8, 
9]}}}, {'path': 'tests/steps/test_archive.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6]}}}, {'path': 'tests/test_ai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}, {'path': 'tests/test_chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}, {'path': 'tests/test_collect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9, 10, 11, 12]}}}, {'path': 'tests/test_db.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"evals/evals_new_code.py",
"gpt_engineer/learning.py",
"gpt_engineer/db.py",
"evals/evals_existing_code.py",
"gpt_engineer/ai.py",
"gpt_engineer/chat_to_files.py",
"gpt_engineer/main.py",
"scripts/rerun_edited_message_logs.py",
"gpt_engineer/api.py",
"gpt_engineer/steps.py",
"gpt_engineer/domain.py",
"gpt_engineer/collect.py",
"gpt_engineer/file_selector.py"
],
"doc": [],
"test": [
"tests/test_ai.py",
"tests/test_chat_to_files.py",
"tests/steps/test_archive.py",
"tests/test_collect.py",
"tests/test_db.py"
],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | ba00896c5673990923abd0e99dba147938871512 | https://github.com/AntonOsika/gpt-engineer/issues/79 | Analysis - Give context of a project to GPT Engineer | GPT Engineer is amazing. But right now the purpose is for small projects, projects where you need little implementations or requirements.
But... what about giving it the full context of a project? If ChatGPT can understand what methods and classes a project on GitHub or a package on npm has, maybe it can fully understand a project and modify parts of it.
What about the limits of the ChatGPT prompt? We can send several prompts in sequence to build up a full understanding of what's going on.
I can work on this if someone has the courage to develop it with me. | null | https://github.com/AntonOsika/gpt-engineer/pull/465 | null | {'base_commit': 'ba00896c5673990923abd0e99dba147938871512', 'files': [{'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'to_files', 37)": {'add': [42]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'main', 19)": {'add': [26, 40], 'mod': [62, 63, 67]}}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12, 21, 247, 327], 'mod': [11]}, "('Config', None, 267)": {'add': [277]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/chat_to_files.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | 0596b07a39c2c99c46509c17660f5c8aef4b2114 | https://github.com/AntonOsika/gpt-engineer/issues/388 | good first issue | Remove "run_id" and "delete_existing" options: instead move old memory/workspace folder to "archive" by default | The first step in the main file would be to check for memory folder and workspace, if they exist create a new folder in "archive" e.g. with the name "currentdate_currenttime", and move everything there.
This would make main.py much nicer, and make it clearly defined that all files, apart from `archive` folder, in the project directory are from the most recent run.
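That first step can be sketched in a few lines; a hedged illustration (the folder names and the timestamp format are assumptions for illustration, not the merged implementation):

```python
import shutil
from datetime import datetime
from pathlib import Path

def archive_previous_run(project_path: Path) -> None:
    """Move any existing memory/workspace folders into a timestamped
    archive subfolder, so the project dir only holds the latest run."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    for name in ("memory", "workspace"):
        src = project_path / name
        if src.exists():
            dest = project_path / "archive" / stamp / name
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))
```

Calling this at the top of main would replace both the "run_id" and the "delete_existing" options.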
(It is also a prerequisite to later add handling of logging to separate files when there are "multiple of the same steps") | null | https://github.com/AntonOsika/gpt-engineer/pull/409 | null | {'base_commit': '0596b07a39c2c99c46509c17660f5c8aef4b2114', 'files': [{'path': 'gpt_engineer/db.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "('DBs', None, 44)": {'add': [49]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'main', 19)": {'add': [53, 59], 'mod': [21, 38, 39, 40, 42, 43, 44, 45]}, '(None, None, None)': {'mod': [3]}}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 1, 2, 260, 282, 291, 299, 308, 315], 'mod': [289, 290]}}}, {'path': 'tests/test_collect.py', 'status': 'modified', 'Loc': {"(None, 'test_collect_learnings', 15)": {'mod': [22]}}}, {'path': 'tests/test_db.py', 'status': 'modified', 'Loc': {"(None, 'test_DBs_initialization', 29)": {'add': [43], 'mod': [30]}, "(None, 'test_DBs_instantiation_with_wrong_number_of_arguments', 102)": {'mod': [109]}, "(None, 'test_DBs_dataclass_attributes', 112)": {'mod': [113]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/db.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/test_db.py",
"tests/test_collect.py"
],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | dc7a2bd0f546ea29929faa57b8e618c413c86bb2 | https://github.com/AntonOsika/gpt-engineer/issues/582 | triage | RuntimeError: ('Message exceeds %skb limit. (%s)', AFTER it asks me to run the code | I am running a quite complex prompt to create a python app with a PSQL DB backend. I already have the whole DB schema ready and pasted it into the prompt.
## Expected Behavior
the app is created according to my prompt.
## Current Behavior
Only part of the files are created; then it asks me to run the code, which fails for unrelated reasons, and then shows the error message:
RuntimeError: ('Message exceeds %skb limit. (%s)', '32', '{\'integrations\': {\'All\': True}, \'anonymousId\': None, \'properties\': {\'model\': \'gpt-3.5-turbo\', \'temperature\': 0.1, \'steps\': \'["clarify", "gen_clarified_code", "gen_entrypoint", "execute_entrypoint", "human_review"]\', \'steps_file_hash\': \'\', \'prompt\'
REMAINING OUTPUT of gpt engineer.
The error seems to be in the analytics code:
File "/home/stefan/.local/bin/gpt-engineer", line 8, in <module>
sys.exit(app())
^^^^^
File "/home//code/gpt-engineer/gpt_engineer/main.py", line 61, in main
collect_learnings(model, temperature, steps, dbs)
File "/home/code/gpt-engineer/gpt_engineer/collect.py", line 28, in collect_learnings
send_learning(learnings)
File "/home/code/gpt-engineer/gpt_engineer/collect.py", line 17, in send_learning
rudder_analytics.track(
File "/home/.local/lib/python3.11/site-packages/rudderstack/analytics/__init__.py", line 53, in track
_proxy('track', *args, **kwargs)
File "/home/.local/lib/python3.11/site-packages/rudderstack/analytics/__init__.py", line 113, in _proxy
fn(*args, **kwargs)
File "/home/.local/lib/python3.11/site-packages/rudderstack/analytics/client.py", line 141, in track
return self._enqueue(msg)
^^^^^^^^^^^^^^^^^^
File "/home/.local/lib/python3.11/site-packages/rudderstack/analytics/client.py", line 279, in _enqueue
raise RuntimeError('Message exceeds %skb limit. (%s)', str(int(MAX_MSG_SIZE / 1024)), str(msg))
UPDATE: confirmed related to Rudderstack, does not happen when you opt out
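One way to avoid the crash is to check the serialized payload size before handing it to the analytics client. A hedged sketch of that idea — the helper name is hypothetical; only the 32 KB limit comes from the traceback above:

```python
MAX_MSG_SIZE = 32 * 1024  # Rudderstack's per-message limit, per the traceback

def truncate_learnings(payload: str, limit: int = MAX_MSG_SIZE) -> str:
    """Trim an analytics payload so it stays under the client's size
    limit instead of raising RuntimeError inside _enqueue()."""
    encoded = payload.encode("utf-8")
    if len(encoded) <= limit:
        return payload
    # Keep a prefix and mark the truncation so the event is still useful.
    return encoded[: limit - 20].decode("utf-8", errors="ignore") + "...[truncated]"
```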
| null | https://github.com/AntonOsika/gpt-engineer/pull/632 | null | {'base_commit': 'dc7a2bd0f546ea29929faa57b8e618c413c86bb2', 'files': [{'path': 'gpt_engineer/collect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "(None, 'send_learning', 11)": {'mod': [31, 32, 33, 34, 35]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/collect.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 66cd09c789bfcae57e144fcaea86050b97230f18 | https://github.com/AntonOsika/gpt-engineer/issues/150 | bug | AttributeError: 'tuple' object has no attribute 'expandtabs' | I'm getting the following error when running `python -m gpt_engineer.main`. I'm using python 3.11/
```
File "/opt/miniconda3/envs/gpt-eng/lib/python3.11/inspect.py", line 873, in cleandoc
lines = doc.expandtabs().split('\n')
^^^^^^^^^^^^^^
AttributeError: 'tuple' object has no attribute 'expandtabs'
``` | null | https://github.com/AntonOsika/gpt-engineer/pull/152 | null | {'base_commit': '66cd09c789bfcae57e144fcaea86050b97230f18', 'files': [{'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'chat', 16)": {'mod': [21]}}}, {'path': 'identity/generate', 'status': 'modified', 'Loc': {}}, {'path': 'scripts/benchmark.py', 'status': 'modified', 'Loc': {"(None, 'main', 13)": {'mod': [33, 53, 61]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/main.py",
"scripts/benchmark.py"
],
"doc": [],
"test": [],
"config": [],
"asset": [
"identity/generate"
]
} | 1 |
AntonOsika | gpt-engineer | 6ccd05ab65dcd83d6057c6c068a3f5290ab09176 | https://github.com/AntonOsika/gpt-engineer/issues/49 | GPT4ALL support or open source models | OpenAI's model 3.5 breaks frequently and is low quality in general.
Falcon, Vicuna, Hermes and more should be supported as they're open source, free, and moving away from paid closed source is good practice and opens applications to huge user base who wants free access to these tools. | null | https://github.com/AntonOsika/gpt-engineer/pull/63 | null | {'base_commit': '6ccd05ab65dcd83d6057c6c068a3f5290ab09176', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {}}, {'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [4], 'mod': [7]}, "('AI', 'next', 42)": {'add': [63], 'mod': [46, 48, 50, 51, 60, 61, 62]}, "('AI', None, 10)": {'mod': [10, 11, 12, 25, 26, 27, 28, 29, 33, 34, 36, 37, 39, 40, 42, 43, 44]}, "('AI', '__init__', 11)": {'mod': [14, 15, 16, 17, 18, 19, 20, 21, 22, 23]}, "('AI', 'start', 25)": {'mod': [31]}}}, {'path': 'gpt_engineer/chat_to_files.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'to_files', 37)": {'mod': [37]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1, 10, 11, 13]}, "(None, 'main', 19)": {'mod': [24, 25, 47, 48, 49, 50, 62]}}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5, 9], 'mod': [1, 227]}, "(None, 'setup_sys_prompt', 12)": {'mod': [12]}, "(None, 'simple_gen', 16)": {'mod': [16, 17, 18, 19, 20, 21, 22, 23]}, "(None, 'clarify', 26)": {'mod': [26, 28, 30, 31, 33, 35, 39, 42, 45]}, "(None, 'gen_spec', 57)": {'mod': [57, 62, 63, 64, 65, 67, 69]}, "(None, 'respec', 74)": {'mod': [74, 75, 76, 81, 83, 84, 85, 86, 87, 91]}, "(None, 'gen_unit_tests', 95)": {'mod': [95, 99, 100, 101, 102, 103, 105, 107]}, "(None, 'gen_clarified_code', 113)": {'mod': [113, 116, 118, 119, 120, 121, 123]}, "(None, 'gen_code', 127)": {'mod': [127, 130, 131, 132, 133, 134, 135, 136, 137]}, "(None, 'execute_entrypoint', 141)": {'mod': [141, 152, 162]}, "(None, 'gen_entrypoint', 165)": {'mod': [165, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 
177, 178, 179, 184]}, "(None, 'use_feedback', 189)": {'mod': [189, 190, 191, 192, 193, 194, 195, 196, 197]}, "(None, 'fix_code', 201)": {'mod': [201, 202, 203, 204, 205, 206, 207, 208, 209, 210]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [11]}}}, {'path': 'requirements.txt', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}}}, {'path': 'scripts/rerun_edited_message_logs.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3], 'mod': [6, 7]}, "(None, 'main', 13)": {'mod': [15, 19, 30, 32]}}}, {'path': 'tests/test_ai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}, "(None, 'test_ai', 7)": {'mod': [8]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/chat_to_files.py",
"gpt_engineer/ai.py",
"gpt_engineer/main.py",
"scripts/rerun_edited_message_logs.py",
"gpt_engineer/steps.py"
],
"doc": [],
"test": [
"tests/test_ai.py"
],
"config": [
".gitignore",
"pyproject.toml",
"requirements.txt"
],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | dc7a2bd0f546ea29929faa57b8e618c413c86bb2 | https://github.com/AntonOsika/gpt-engineer/issues/530 | Using gpt-engineer with Azure OpenAI |
Hi, I am trying to test gpt-engineer using Azure OpenAI, but I am getting an authentication error. I have added all the additional details required for Azure OpenAI, like the api_base URL, model, etc., in the file ai.py in the gpt_engineer folder. Am I missing something? Can you please help me with this issue?
I have set the OpenAI API key as a Windows environment variable. All other steps were followed according to the README file.
<img width="946" alt="image" src="https://github.com/AntonOsika/gpt-engineer/assets/53396422/d3657e3b-1e49-4f6c-adac-18125ee1f29f">
| null | https://github.com/AntonOsika/gpt-engineer/pull/640 | null | {'base_commit': 'dc7a2bd0f546ea29929faa57b8e618c413c86bb2', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [57]}}}, {'path': 'gpt_engineer/ai.py', 'status': 'modified', 'Loc': {"('AI', '__init__', 40)": {'add': [54], 'mod': [52, 53]}, "(None, 'create_chat_model', 338)": {'add': [353], 'mod': [338]}, '(None, None, None)': {'mod': [13]}, "('AI', None, 39)": {'mod': [40]}}}, {'path': 'gpt_engineer/learning.py', 'status': 'modified', 'Loc': {"(None, 'human_review_input', 54)": {'add': [63], 'mod': [95]}, "(None, 'check_consent', 106)": {'add': [122, 124], 'mod': [106, 113]}}}, {'path': 'gpt_engineer/main.py', 'status': 'modified', 'Loc': {"(None, 'main', 27)": {'add': [39, 55]}}}, {'path': 'gpt_engineer/steps.py', 'status': 'modified', 'Loc': {"(None, 'execute_entrypoint', 218)": {'mod': [221, 225]}, "(None, 'human_review', 374)": {'mod': [377]}}}]} | [] | [] | [] | {
"iss_type": "3",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/ai.py",
"gpt_engineer/main.py",
"gpt_engineer/steps.py",
"gpt_engineer/learning.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | 3e589bf1356024fb471a9d17738e4626f21a953b | https://github.com/AntonOsika/gpt-engineer/issues/1143 | enhancement
good first issue
triage | Add GPTE CLI argument to output system information | When running GPTE, it will be quite helpful to be able to quickly generate useful system information for use in debugging issues.
For example, this should be invoked as `gpte --sysinfo`.
This invocation should output system information in a standardized and useful way, so that users can readily copy and paste the output into GitHub, Discord, etc ...
Here are some requirements for this CLI argument:
* The CLI argument should use system-native commands or those available from the packages installed by GPTE (i.e. it should not require or install additional tools).
* The CLI argument should not expose personally identifiable or other sensitive information.
* When running `gpte --sysinfo` the application immediately outputs the system information without executing any of the other application flow and returns the user back to the command line.
* When running `gpte --sysinfo` the application does not require an OpenAI (or any other LLM) API key but, rather, immediately generates the system information and outputs it.
Here are some examples of system information that should be returned by running `gpte --sysinfo`:
Outputs of Linux operating system commands like:
* `uname -a`
* `lsb_release -a`
* `cat /proc/version`
and, in Windows:
* `systeminfo`
We should also include Python-specific information, like the output of:
* `pip freeze`
* `python --version`
* `which python`
These are indicative but not comprehensive.
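The command list above can be approximated cross-platform with only the standard library; a minimal sketch (the output format and field choice here are assumptions, and a real implementation should also scrub user-specific paths before the output is shared):

```python
import platform
import sys

def sysinfo() -> str:
    """Collect basic, non-sensitive system details for bug reports."""
    lines = [
        f"OS: {platform.system()} {platform.release()} ({platform.machine()})",
        f"Python: {platform.python_version()}",
        # Note: the executable path may reveal a username; a real
        # implementation should redact home directories.
        f"Python executable: {sys.executable}",
    ]
    return "\n".join(lines)

print(sysinfo())
```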
This is a great first issue for a new contributor! | null | https://github.com/AntonOsika/gpt-engineer/pull/1169 | null | {'base_commit': '3e589bf1356024fb471a9d17738e4626f21a953b', 'files': [{'path': 'gpt_engineer/applications/cli/main.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [28, 30, 239]}, "(None, 'main', 250)": {'add': [331, 371, 382]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/applications/cli/main.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | e7e329211655d08e48d04ce828f929c9108050ad | https://github.com/AntonOsika/gpt-engineer/issues/14 | exporting the api key to the environment doesn't work for me | I can't get the export command to work, so an alternative solution like using an extern file or hardcoding the api in the code would be a nice solution. I personally created an external json config file and parsed the api key from that to the python script.
So a solution could be:
1) Make a json file named "config.json"
2) Inside of ai.py add:
```
import json
def get_api_key(file_name: str) -> str:
    with open(file_name, 'r') as f:
        config = json.load(f)
    return config['openai_api_key']
```
3) Inside of config.json add:
```
{
"openai_api_key": "your_api_key"
}
```
4) In the __init__ part of the AI class add:
```
class AI:
    def __init__(self, **kwargs):
        openai.api_key = get_api_key("config.json")
        self.kwargs = kwargs
``` | null | https://github.com/AntonOsika/gpt-engineer/pull/22 | null | {'base_commit': 'e7e329211655d08e48d04ce828f929c9108050ad', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22]}}}, {'path': 'ai.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 3]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"ai.py"
],
"doc": [
"README.md"
],
"test": [],
"config": [
".gitignore"
],
"asset": []
} | 1 | |
AntonOsika | gpt-engineer | 164730a5b933ec0ebc9003c72f60e58176ef0dc6 | https://github.com/AntonOsika/gpt-engineer/issues/819 | enhancement | Automatic benchmarking of gpt-engineer with APPS | ## Feature description
gpt-engineer has an automatic evals suite in "evals/eval_new_code.py". However, only 2 test cases are given in evals/new_code_eval.yaml. As an alternative to filling in more test cases manually, we should parse prompts and tests from the (very large) APPS dataset (https://paperswithcode.com/dataset/apps).
Since APPS is way too large to run in its entirety, there should be functionality to run n randomly selected tests and run n tests according to some predetermined test ordering (so that consecutive benchmark runs are comparable).
The APPS dataset should not be added to the gpt-engineer git repo! Probably the best way to handle this is to pull it from huggingface (https://huggingface.co/datasets/codeparrot/apps) in the code itself (potentially caching it and gitignoring it so it doesn't need to be pulled on every run).
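The "n randomly selected tests vs. n tests in a predetermined order" requirement can be covered by a seeded sampler. A minimal sketch, assuming the task IDs have already been fetched (e.g. via huggingface `datasets.load_dataset("codeparrot/apps")`, omitted here to stay self-contained); the function name and the stand-in IDs are illustrative:

```python
import random

def select_tasks(task_ids, n, seed=None):
    """Pick n benchmark tasks: random when seed is None, otherwise a
    deterministic sample so consecutive runs stay comparable."""
    rng = random.Random(seed)
    return rng.sample(list(task_ids), min(n, len(task_ids)))

ids = [f"apps-{i}" for i in range(100)]  # stand-in for real APPS task ids
assert select_tasks(ids, 5, seed=42) == select_tasks(ids, 5, seed=42)
```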
## Motivation/Application
Automatic benchmarking is the ideal way to determine whether an imposed change to the code base is advantageous. | null | https://github.com/AntonOsika/gpt-engineer/pull/1051 | null | {'base_commit': '164730a5b933ec0ebc9003c72f60e58176ef0dc6', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [90]}}}, {'path': 'gpt_engineer/benchmark/benchmarks/load.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12, 19]}}}, {'path': 'gpt_engineer/benchmark/run.py', 'status': 'modified', 'Loc': {"(None, 'run', 24)": {'add': [50], 'mod': [52]}, "(None, 'print_results', 87)": {'add': [107], 'mod': [109, 121, 123, 124, 125, 126, 127, 128, 129, 130]}}}, {'path': 'gpt_engineer/benchmark/types.py', 'status': 'modified', 'Loc': {"('TaskResult', None, 74)": {'add': [77]}}}, {'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [722, 789, 997, 2002, 2375, 2626, 2905, 4185, 4244], 'mod': [1013, 1151, 1156, 1157, 1174, 1179]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/benchmark/types.py",
"gpt_engineer/benchmark/benchmarks/load.py",
"gpt_engineer/benchmark/run.py"
],
"doc": [],
"test": [],
"config": [
".gitignore",
"poetry.lock"
],
"asset": []
} | 1 |
AntonOsika | gpt-engineer | 1ad0892697e8468939a914f12bbf7378a1e045a2 | https://github.com/AntonOsika/gpt-engineer/issues/914 | enhancement | Automatic benchmarking of gpt-engineer with MBPP | ## Feature description
We have a way to easily add benchmarks:
https://www.loom.com/share/206805143fbb4302b5455a5329eaab17?sid=f689608f-8e49-44f7-b55f-4c81e9dc93e6
This issue is about looking into if [mbpp](https://huggingface.co/datasets/mbpp) is a good benchmark to add and then add a simple version of it. | null | https://github.com/AntonOsika/gpt-engineer/pull/1103 | null | {'base_commit': '1ad0892697e8468939a914f12bbf7378a1e045a2', 'files': [{'path': '.gitignore', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [93]}}}, {'path': 'gpt_engineer/benchmark/__main__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [30]}, "(None, 'main', 54)": {'add': [89]}}}, {'path': 'gpt_engineer/benchmark/benchmarks/apps/load.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [26]}}}, {'path': 'gpt_engineer/benchmark/benchmarks/load.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14, 20]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"gpt_engineer/benchmark/benchmarks/load.py",
"gpt_engineer/benchmark/benchmarks/apps/load.py",
"gpt_engineer/benchmark/__main__.py"
],
"doc": [],
"test": [],
"config": [
".gitignore"
],
"asset": []
} | 1 |
lllyasviel | Fooocus | d16a54edd69f82158ae7ffe5669618db33a01ac7 | https://github.com/lllyasviel/Fooocus/issues/2863 | bug | [Bug]: app-1 | sh: 1: /content/entrypoint.sh: not found (docker compose) | ### Checklist
- [ ] The issue has not been resolved by following the [troubleshooting guide](https://github.com/lllyasviel/Fooocus/blob/main/troubleshoot.md)
- [ ] The issue exists on a clean installation of Fooocus
- [ ] The issue exists in the current version of Fooocus
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
```
docker compose build --no-cache
(venv) deb-workshop :: ~/Fooocus-2024 ‹main› » docker-compose up
WARN[0000] /home/username/Fooocus-2024/docker-compose.yml: `version` is obsolete
[+] Running 2/3
✔ Network fooocus-2024_default Created 0.1s
✔ Volume "fooocus-2024_fooocus-data" Created 0.0s
⠋ Container fooocus-2024-app-1 Created 0.1s
Attaching to app-1
app-1 | sh: 1: /content/entrypoint.sh: not found
app-1 exited with code 127
```
### Steps to reproduce the problem
latest main branch
### What should have happened?
It should run.
### What browsers do you use to access Fooocus?
_No response_
### Where are you running Fooocus?
None
### What operating system are you using?
_No response_
### Console logs
```Shell
docker compose build --no-cache
(venv) deb-workshop :: ~/Fooocus-2024 ‹main› » docker-compose up
WARN[0000] /home/username/Fooocus-2024/docker-compose.yml: `version` is obsolete
[+] Running 2/3
✔ Network fooocus-2024_default Created 0.1s
✔ Volume "fooocus-2024_fooocus-data" Created 0.0s
⠋ Container fooocus-2024-app-1 Created 0.1s
Attaching to app-1
app-1 | sh: 1: /content/entrypoint.sh: not found
app-1 exited with code 127
```
### Additional information
_No response_ | null | https://github.com/lllyasviel/Fooocus/pull/2865 | null | {'base_commit': 'd16a54edd69f82158ae7ffe5669618db33a01ac7', 'files': [{'path': 'entrypoint.sh', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [33], 'mod': [1]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"entrypoint.sh"
]
} | 1 |
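Editor's note on the record above: exit status 127 with "not found" for a script that clearly exists inside the container is the classic symptom of a bad interpreter line, most often Windows CRLF endings turning the shebang into `/bin/sh\r`. The linked PR touches the first line of `entrypoint.sh`, which is consistent with that diagnosis, but treat this as an inference. A throwaway reproduction (file names are illustrative):

```shell
# Build a script with CRLF endings: the shebang becomes "#!/bin/sh\r", and
# the kernel then looks for an interpreter literally named "/bin/sh\r".
printf '#!/bin/sh\r\necho hello\r\n' > entrypoint_crlf.sh
chmod +x entrypoint_crlf.sh
# Strip the carriage returns (same effect as dos2unix):
tr -d '\r' < entrypoint_crlf.sh > entrypoint_lf.sh
chmod +x entrypoint_lf.sh
./entrypoint_lf.sh   # prints "hello"; the CRLF variant would fail with 127
```

In a Dockerfile the same normalization can be done at build time with `sed -i 's/\r$//'`, or prevented entirely by marking the file `entrypoint.sh text eol=lf` in `.gitattributes` so checkout on Windows never introduces CRLF.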
lllyasviel | Fooocus | 179bcb2c4e6e6b9574c5a38e28e3c9813ed95bd7 | https://github.com/lllyasviel/Fooocus/issues/1247 | Canvas zoom for the inpainting canvas | Can we get a canvas zoom feature similar to what https://github.com/richrobber2/canvas-zoom provides for A1111?
Fooocus has by far the best inpainting/outpainting backend. It would be nice if the frontend was spruced up a bit too. | null | https://github.com/lllyasviel/Fooocus/pull/1428 | null | {'base_commit': '179bcb2c4e6e6b9574c5a38e28e3c9813ed95bd7', 'files': [{'path': 'css/style.css', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [96]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"css/style.css"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
odoo | odoo | 4213eebe2ebe6b0c81580176b263aeee9fa6a3fd | https://github.com/odoo/odoo/issues/304 | Bug 1089229: Wrong treatment of UoS among objects | **Impacted versions:**
6.1 and above
**Steps to reproduce:**
See https://bugs.launchpad.net/openobject-addons/+bug/1089229
**Current behavior:**
- If you change units of sale (uos) quantity in sales order, uom quantity is not recalculated, thus breaking the relation between uom and uos (uos_coeff).
- If you change the uom or uos within their category in sales order or invoice, nothing happens --> Thus breaking again the relation between uom and uos (there is no recalculation, and it's not the same selling grams and kilograms).
- Sale order lines shows only uom quantities and uom prices.
**Expected behavior:**
- If you change units of sale (uos) quantity in sales order, uom quantity should be recalculated accordingly (as happens viceversa).
- If you change uom or uos within their category in sales order or invoice, the other quantity is recalculated. Also, price should be recalculated (because of the change of unit, price(kg)=1000*price(g); and also because if quantity changes, another pricelist may apply).
- If using a secondary uos, sale order lines should show both uom and uos, as well as price_unit(uom) and price_unit(uos). --> This is a much desired feature for salespeople, because many times they know the Sale unit and its price (not the uom and price(uom), which may be more related to warehouse in such cases).
- Both UoM- and UoS-related info (quantities, prices) should be available in product, sale and invoice objects.
**Further info**
This bug (and its code implications) https://bugs.launchpad.net/openobject-addons/+bug/1089229 is still there in master as of today (checked code a few minutes ago, and in runbot there is something weird with the reports so I cannot obtain the sale order report, but the invoice report shows sale unit price in tax column).
Maybe this is just the right time (just before v8) to harmonize uom/uos and price_unit_uom/price_unit_uos among different objects (product.product, sale.order, account.invoice) and be able to keep all info lossless.
Also to fix the uom category conversions (look at the 'FIXME' in code, for example: https://github.com/odoo/odoo/blob/master/addons/sale/sale.py#L1031 )
| null | https://github.com/odoo/odoo/pull/7311 | null | {'base_commit': '4213eebe2ebe6b0c81580176b263aeee9fa6a3fd', 'files': [{'path': 'addons/sale_stock/sale_stock_view.xml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [49, 50, 51, 52, 53]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [],
"asset": [
"addons/sale_stock/sale_stock_view.xml"
]
} | 1 | |
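Editor's note: the recalculation rules the reporter asks for can be sketched in a few lines. This is a hedged illustration, not Odoo's actual API; the function names and the `uos_coeff = uos_qty / uom_qty` convention are assumptions made for the example.

```python
# Hypothetical helpers illustrating the requested recalculation between a
# stock unit of measure (UoM, e.g. kg) and a sale unit (UoS, e.g. g).
# With uos_coeff = uos_qty / uom_qty, selling grams of a kg-stocked product
# gives uos_coeff = 1000, and price(kg) = 1000 * price(g).

def uom_from_uos(uos_qty, uos_coeff):
    """Recalculate the UoM quantity after the user edits the UoS quantity."""
    return uos_qty / uos_coeff

def uos_from_uom(uom_qty, uos_coeff):
    """Recalculate the UoS quantity after the user edits the UoM quantity."""
    return uom_qty * uos_coeff

def price_uos_from_uom(price_uom, uos_coeff):
    """Unit price per UoS unit, derived from the UoM unit price."""
    return price_uom / uos_coeff

# Editing 2500 g on a kg-based line should yield 2.5 kg, not leave the
# UoM quantity stale (the bug described above):
print(uom_from_uos(2500, 1000))  # 2.5
```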
binary-husky | gpt_academic | 2a003e8d494bdfb3132dd40dc8d7face7e52be49 | https://github.com/binary-husky/gpt_academic/issues/1697 | ToDo | [Feature]: 接入"gpt-4-turbo-2024-04-09"模型 | ### Class | 类型
Main program
### Feature Request
Could you add support for the gpt-4-turbo-2024-04-09 and gpt-4-0125-preview models?
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"request_llms/bridge_all.py",
"config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
deepfakes | faceswap | 8d7ca46b2c1fcf0fe8983b0d6effc5fd9d009bff | https://github.com/deepfakes/faceswap/issues/32 | ImportError: No module named pathlib | I have already installed pathlib in python3.6: `Requirement already satisfied: pathlib in /usr/local/lib/python3.6/dist-packages`
Command executed: python3 faceswap.py extract -i ~/faceswap/photo/trump -o ~/faceswap/data/trump
Traceback (most recent call last):
File "faceswap.py", line 3, in <module>
from scripts.extract import ExtractTrainingData
File "/home/ubuntu/data/faceswap/scripts/extract.py", line 2, in <module>
from lib.cli import DirectoryProcessor
File "/home/ubuntu/data/faceswap/lib/cli.py", line 6, in <module>
from lib.utils import get_image_paths, get_folder, load_images, stack_images
File "/home/ubuntu/data/faceswap/lib/utils.py", line 4, in <module>
from pathlib import Path
ImportError: No module named pathlib
Can anyone help me out with this issue?
| null | https://github.com/deepfakes/faceswap/pull/33 | null | {'base_commit': '8d7ca46b2c1fcf0fe8983b0d6effc5fd9d009bff', 'files': [{'path': 'USAGE.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [39, 41, 55]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"USAGE.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
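Editor's note: the error message in this record is a strong hint about the root cause. `ImportError: No module named pathlib` without quotes is Python 2's message format (Python 3 prints `No module named 'pathlib'`), and `pathlib` has been in the standard library since 3.4, so a pip install cannot fix it; the script is almost certainly being executed by a Python 2 interpreter despite the `python3` invocation shown. A hedged sketch of a startup guard (not faceswap's actual code):

```python
# Hypothetical interpreter check: fail loudly and early when run under an
# interpreter too old to ship pathlib, instead of dying on an import.
import sys

def check_interpreter():
    if sys.version_info < (3, 4):  # pathlib joined the stdlib in 3.4
        raise SystemExit("faceswap needs Python 3.4+; got %s"
                         % sys.version.split()[0])
    import pathlib  # guaranteed to be the stdlib version here
    return pathlib.Path(".").exists()

print(check_interpreter())  # True on any Python >= 3.4
```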
deepfakes | faceswap | 47b43191031d0901371d0be362fcccdf547cb4e5 | https://github.com/deepfakes/faceswap/issues/306 | enhancement | Is it possible to implement occlusion masks to original model? | I think GAN model's most interesting feature is occlusion masks. But original model is more stable than GAN and the output of GAN code here is not good. So my question is can we implement this occlusion mask feature to original model? Or is it exclusive to GAN? | null | https://github.com/deepfakes/faceswap/pull/576 | null | {'base_commit': '47b43191031d0901371d0be362fcccdf547cb4e5', 'files': [{'path': '.github/ISSUE_TEMPLATE.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3]}}}, {'path': '.github/ISSUE_TEMPLATE/bug_report.md', 'status': 'removed', 'Loc': {}}, {'path': '.github/ISSUE_TEMPLATE/feature_request.md', 'status': 'removed', 'Loc': {}}, {'path': '.install/windows/MultiDetailPrint.nsi', 'status': 'removed', 'Loc': {}}, {'path': '.install/windows/git_install.inf', 'status': 'removed', 'Loc': {}}, {'path': 'lib/aligner.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [14]}, "('Extract', 'extract', 19)": {'mod': [22, 24]}, "('Extract', 'transform', 37)": {'mod': [41, 43]}, "(None, 'get_matrix_scaling', 126)": {'mod': [126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136]}, "(None, 'get_align_mat', 139)": {'mod': [142]}}}, {'path': 'lib/alignments.py', 'status': 'modified', 'Loc': {"('Alignments', None, 17)": {'add': [272], 'mod': [295]}, "('Alignments', 'rotate_existing_landmarks', 295)": {'add': [308], 'mod': [299, 302]}, "('Alignments', 'hashes_to_frame', 63)": {'mod': [65, 66, 67, 68, 69, 70, 71]}}}, {'path': 'lib/config.py', 'status': 'modified', 'Loc': {"('FaceswapConfig', 'get', 78)": {'mod': [91, 92]}, "('FaceswapConfig', 'get_config_file', 96)": {'mod': [99, 100]}, "('FaceswapConfig', 'check_config_choices', 282)": {'mod': [290, 291, 292]}}}, {'path': 'lib/gui/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': 
{'mod': [3, 4, 6, 8]}}}, {'path': 'lib/gui/display_page.py', 'status': 'modified', 'Loc': {"('DisplayPage', '__init__', 17)": {'add': [22], 'mod': [37]}, '(None, None, None)': {'mod': [9]}, "('DisplayOptionalPage', 'add_option_save', 201)": {'mod': [205]}}}, {'path': 'lib/gui/options.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9], 'mod': [2, 11]}, "('CliOptions', 'gen_cli_arguments', 228)": {'add': [249], 'mod': [235]}}}, {'path': 'lib/keypress.py', 'status': 'modified', 'Loc': {}}, {'path': 'lib/logger.py', 'status': 'modified', 'Loc': {"(None, 'log_setup', 77)": {'mod': [77, 85]}, "(None, 'file_handler', 95)": {'mod': [95, 97, 98, 99, 100, 101, 102]}}}, {'path': 'lib/model/initializers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [6, 7, 9, 10]}, "(None, 'icnr_keras', 13)": {'mod': [19]}, "('ICNR', None, 33)": {'mod': [33, 34, 35, 37, 38, 40, 41, 42, 43, 44, 45, 46, 47, 49, 50, 51, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 70, 71, 72, 73, 74, 75, 78, 79, 80, 81]}}}, {'path': 'lib/model/normalization.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [161], 'mod': [6, 7, 286, 287, 288, 289]}}}, {'path': 'lib/queue_manager.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [9]}, "('QueueManager', '__init__', 22)": {'mod': [35, 36, 37]}}}, {'path': 'lib/sysinfo.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}, "('SysInfo', None, 15)": {'mod': [35, 36, 37, 38, 232, 251]}, "('SysInfo', 'is_virtual_env', 61)": {'mod': [63, 64, 65, 66, 67, 68, 69]}, "('SysInfo', 'cudnn_version', 166)": {'mod': [169, 170, 171, 172, 175, 176, 177, 178, 194, 195, 196, 197]}, "('SysInfo', 'cuda_version_linux', 232)": {'mod': [244, 245, 246, 247]}, "('SysInfo', 'cuda_version_windows', 251)": {'mod': [257, 258, 259, 260]}, "('SysInfo', 'full_info', 264)": {'mod': [278]}}}, {'path': 'lib/umeyama.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [15, 16, 17, 18, 19, 20, 
21, 22, 23, 25, 26, 27, 28, 29, 30, 31, 32, 33, 35]}, "(None, 'umeyama', 35)": {'mod': [55, 56]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [114], 'mod': [18]}, "(None, 'safe_shutdown', 206)": {'add': [217]}, "(None, 'backup_file', 82)": {'mod': [82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 95]}, "(None, 'set_system_verbosity', 95)": {'mod': [106, 107, 111]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/gui/options.py",
"lib/gui/display_page.py",
"lib/alignments.py",
"lib/sysinfo.py",
"lib/config.py",
"lib/logger.py",
"lib/keypress.py",
"lib/utils.py",
"lib/model/normalization.py",
"lib/umeyama.py",
"lib/model/initializers.py",
"lib/aligner.py",
"lib/queue_manager.py",
"lib/gui/__init__.py"
],
"doc": [
".github/ISSUE_TEMPLATE/feature_request.md",
".github/ISSUE_TEMPLATE/bug_report.md",
".github/ISSUE_TEMPLATE.md"
],
"test": [],
"config": [
".install/windows/MultiDetailPrint.nsi",
".install/windows/git_install.inf"
],
"asset": []
} | 1 |
deepfakes | faceswap | 3f04e8cd06e1816e6aa87f3826ebb919cfa983b2 | https://github.com/deepfakes/faceswap/issues/279 | Sharpening the face before applying it | Sharpen by multiplying every pixel by 2, and then subtracting the average value of the neighborhood from it.
I modified Convert_Masked.py and I find the face less blurry on closeups on hi-res pics, though it's a bit too sharp on normal/low res compared to the rest of the image.
YMMV.
```
def apply_new_face(self, image, new_face, image_mask, mat, image_size, size):
    base_image = numpy.copy( image )
    new_image = numpy.copy( image )
    cv2.warpAffine( new_face, mat, image_size, new_image, cv2.WARP_INVERSE_MAP | cv2.INTER_CUBIC, cv2.BORDER_TRANSPARENT )
    kernel = numpy.zeros( (9,9), numpy.float32)
    kernel[4,4] = 2.0
    boxFilter = numpy.ones( (9,9), numpy.float32) / 81.0
    kernel = kernel - boxFilter
    new_image = cv2.filter2D(new_image, -1, kernel)
``` | null | https://github.com/deepfakes/faceswap/pull/285 | null | {'base_commit': '3f04e8cd06e1816e6aa87f3826ebb919cfa983b2', 'files': [{'path': 'plugins/Convert_Masked.py', 'status': 'modified', 'Loc': {"('Convert', '__init__', 9)": {'add': [20]}, "('Convert', None, 8)": {'mod': [9]}, "('Convert', 'apply_new_face', 36)": {'mod': [42]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {"('ConvertImage', 'add_optional_arguments', 24)": {'add': [131]}, "('ConvertImage', 'process', 152)": {'add': [179]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"plugins/Convert_Masked.py",
"scripts/convert.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
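Editor's note: the kernel in this record is a standard unsharp mask, a centre weight of 2.0 minus a 9x9 box blur. Its weights sum to 1, so flat regions pass through unchanged while edges are amplified. A self-contained, pure-Python sanity check (no OpenCV required):

```python
# Rebuild the 9x9 kernel from the issue: centre weight 2.0 minus a box
# filter of 1/81 everywhere, then verify the two defining properties.
N = 9
kernel = [[(2.0 if (i, j) == (4, 4) else 0.0) - 1.0 / 81.0
           for j in range(N)] for i in range(N)]

# Property 1: the weights sum to 1, so overall brightness is preserved.
total = sum(sum(row) for row in kernel)
print(round(total, 6))  # 1.0

# Property 2: convolving a flat 100-valued patch leaves it at 100.
flat_response = sum(kernel[i][j] * 100.0 for i in range(N) for j in range(N))
print(round(flat_response, 4))  # 100.0
```

With OpenCV available, `cv2.filter2D(image, -1, kernel)` applies the same kernel, exactly as in the snippet above.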
deepfakes | faceswap | a561f5b78bf09e785686b500c4825641b0823791 | https://github.com/deepfakes/faceswap/issues/628 | Increase training_data generation speed | For some settings the training_data generation takes a long time especially "warp to landmarks" is pretty slow.
IMO using multiprocessing would speed things up a lot.
But there is also some stuff that could be cached, like `get_closest_match` (used in warp to landmarks).
I did some quick and dirty profiling.
See https://github.com/kilroythethird/faceswap/tree/perf_test
<details>
<summary>Profiling with current staging</summary>
<p>
```
# python faceswap.py train -A faces/a -B faces/b -m model -t original -s 400 -bs 32 -it 201 -g 1 -ps 250 -wl -L INFO
Thread: load_batches_0
ALL PER CALL COUNT
1201.998871 0.187578 6408 process_face(side=a)
691.611971 0.107929 6408 random_warp_landmarks
367.470097 0.057337 6409 get_closest_match
55.490737 0.008658 6409 mask_function
32.234393 0.005030 6409 random_transform
23.848031 0.003721 6409 cv2.imread
7.767768 0.001212 6409 get_landmarks
7.564588 0.001180 6409 sha1(image)
0.588165 0.000092 6409 do_random_flip
Thread: load_batches_0
ALL PER CALL COUNT
1161.078251 0.175178 6628 process_face(side=b)
730.652074 0.110237 6628 random_warp_landmarks
282.749230 0.042660 6628 get_closest_match
58.361239 0.008804 6629 mask_function
33.519484 0.005056 6629 random_transform
22.260553 0.003358 6629 cv2.imread
7.995775 0.001206 6629 get_landmarks
7.795409 0.001176 6629 sha1(image)
0.688545 0.000104 6629 do_random_flip
Thread: training_0
ALL PER CALL COUNT
1018.465688 5.092328 200 Batcher(a)->train_one_batch
843.560930 4.217805 200 Batcher.get_next
174.890468 0.874452 200 train_on_batch
191.730749 0.958654 200 Batcher(b)->train_one_batch
106.012409 0.530062 200 train_on_batch
85.689574 0.428448 200 Batcher.get_next
```
</p></details>
## Suggestions
### Multiprocessing
I wrote a "FixedProducerDispatcher" class which runs a work function in x subprocesses and uses fixed shared memory to store the data. Each run creates a whole batch.
The only downside to this i see is that we now need to know how big and in which shape the batch is before starting the subprocesses.
See https://github.com/kilroythethird/faceswap/tree/mp_training_data
and https://github.com/kilroythethird/faceswap/tree/perf_test_mp (with profiling output)
This definitely helps performance-wise
<details>
<summary>Profiling with multiprocessing</summary>
<p>
```
# python faceswap.py train -A faces/a -B faces/b -m model -t original -s 400 -bs 32 -it 201 -g 1 -ps 250 -wl -L INFO
Thread: load_batches_0
ALL PER CALL COUNT
184.241626 0.028362 6496 process_face(side=a)
132.813554 0.020445 6496 random_warp_landmarks
11.279126 0.001736 6496 get_closest_match
10.877508 0.001674 6496 random_transform
9.162846 0.001411 6496 mask_function
7.910127 0.001218 6496 cv2.imread
3.066403 0.000472 6496 get_landmarks
2.870745 0.000442 6496 sha1(image)
0.097161 0.000015 6496 do_random_flip
Thread: load_batches_0
ALL PER CALL COUNT
181.656559 0.027964 6496 process_face(side=b)
134.033812 0.020633 6496 random_warp_landmarks
11.135122 0.001714 6496 random_transform
8.973757 0.001381 6496 mask_function
7.647873 0.001177 6496 get_closest_match
7.452525 0.001147 6496 cv2.imread
3.014454 0.000464 6496 get_landmarks
2.841576 0.000437 6496 sha1(image)
0.159162 0.000025 6496 do_random_flip
Thread: training_0
ALL PER CALL COUNT
171.892934 0.859465 200 Batcher(a)->train_one_batch
162.831906 0.814160 200 train_on_batch
9.050015 0.045250 200 Batcher.get_next
111.126615 0.555633 200 Batcher(b)->train_one_batch
102.814357 0.514072 200 train_on_batch
8.296469 0.041482 200 Batcher.get_next
```
</p></details>
### Caching
Also some function cached here:
https://github.com/kilroythethird/faceswap/tree/perf_test_caching
and https://github.com/kilroythethird/faceswap/tree/perf_test_all (with multiprocessing and caching)
I am not 100% sure caching + multiprocessing works properly on Windows systems (spawn vs fork); if someone could test that, it would be awesome.
Functions cached:
- `sha1(img).hexdigest()`, i.e. the hash-creation function.
Cached by filename and side.
This doesn't bring that much (in absolute terms), but it also doesn't really harm.
- The major (non-random) part of `get_closest_match`, i.e. "warp to landmark".
This caches only the indices of the 10 closest images from the other set for each face (so maximum some kb).
This brings performance up for "warp to landmark" by a good chunk and should def be done in some way or another i think.
- `mask_function`. Currently I cache only the mask (256,256,1), but for every image.
So this adds up: assuming 1000 faces in each set, that means 250MB for each side (`(256*256*4*1000)/1024./1024.`).
I am not really sure if this is worth it to be honest.
<details>
<summary>Profiling with caching and without multiprocessing</summary>
<p>
```
Thread: load_batches_0
ALL PER CALL COUNT
889.992215 0.136607 6515 process_face(side=a)
764.990586 0.117420 6515 random_warp_landmarks
32.839521 0.005041 6515 random_transform
23.343189 0.003583 6515 get_closest_match
22.460514 0.003447 6516 cv2.imread
20.894716 0.003207 6516 mask_function
0.799270 0.000123 6516 get_landmarks
0.641308 0.000098 6516 sha1(image)
0.585590 0.000090 6515 do_random_flip
Thread: load_batches_0
ALL PER CALL COUNT
878.511239 0.137182 6404 process_face(side=b)
744.183952 0.116206 6404 random_warp_landmarks
33.463518 0.005225 6405 get_closest_match
31.718939 0.004952 6405 random_transform
22.498738 0.003513 6405 cv2.imread
21.485141 0.003354 6405 mask_function
1.094532 0.000171 6405 get_landmarks
0.942542 0.000147 6405 sha1(image)
0.604323 0.000094 6405 do_random_flip
Thread: training_0
ALL PER CALL COUNT
727.685975 3.638430 200 Batcher(b)->train_one_batch
625.898893 3.129494 200 Batcher.get_next
101.772938 0.508865 200 train_on_batch
195.816539 0.979083 200 Batcher(a)->train_one_batch
169.204756 0.846024 200 train_on_batch
26.595766 0.132979 200 Batcher.get_next
```
</p></details>
Let me know what you think. I could prepare patches for both, or just subsets.
For me multiprocessing in some form and warp to landmark speed up are the important things.
| null | https://github.com/deepfakes/faceswap/pull/690 | null | {'base_commit': 'a561f5b78bf09e785686b500c4825641b0823791', 'files': [{'path': 'lib/training_data.py', 'status': 'modified', 'Loc': {"('TrainingDataGenerator', '__init__', 23)": {'add': [34]}, "('TrainingDataGenerator', 'load_batches', 84)": {'add': [89]}, '(None, None, None)': {'mod': [7]}, "('TrainingDataGenerator', 'get_closest_match', 186)": {'mod': [190, 191, 192, 193, 194, 195]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/training_data.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
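Editor's note: the "warp to landmark" cache proposed in this record works because the nearest faces in the other set depend only on the fixed landmarks, so the lookup is a pure function and can be memoised. A hedged sketch with `functools.lru_cache`; the data and function names are illustrative, not faceswap's API:

```python
# Memoise the non-random part of the closest-match lookup: for each face,
# the indices of its k nearest faces in the *other* side never change.
from functools import lru_cache

# Toy stand-in for per-face landmark data (one 2-D point per face here).
LANDMARK_SETS = {
    "a": [(0.0, 0.0), (1.0, 1.0), (5.0, 5.0)],
    "b": [(0.2, 0.1), (4.9, 5.2), (9.0, 9.0)],
}

@lru_cache(maxsize=None)
def closest_indices(side, face_index, k=2):
    """Indices of the k nearest faces in the other side, cached forever."""
    other = "b" if side == "a" else "a"
    x, y = LANDMARK_SETS[side][face_index]
    ranked = sorted(
        range(len(LANDMARK_SETS[other])),
        key=lambda i: (LANDMARK_SETS[other][i][0] - x) ** 2
                      + (LANDMARK_SETS[other][i][1] - y) ** 2,
    )
    return tuple(ranked[:k])

print(closest_indices("a", 0))  # (0, 1)
closest_indices("a", 0)         # repeat call, served from the cache
print(closest_indices.cache_info().hits)  # 1
```

One caveat from the record applies here too: an in-process cache like this is per worker, so under spawn-based multiprocessing each subprocess warms its own copy.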
deepfakes | faceswap | b057b719ce5665590beb3ba1782721bc6257963a | https://github.com/deepfakes/faceswap/issues/1143 | bug | Disabling AMD and CUDA sets backed to "cpu" in config, but running faceswap -h still tries to load CUDA | Turning off all GPU related config items during setup does create config/.faceswap, which contains {"backend": "cpu"}.
However, running faceswap.py -h throws an exception and terminates the program:
Setting Faceswap backend to CPU
Traceback (most recent call last):
File "faceswap.py", line 6, in <module>
from lib.cli import args as cli_args
File "/Users/mrfredsmoothie/software/faceswap/lib/cli/args.py", line 13, in <module>
from lib.gpu_stats import GPUStats
File "/Users/mrfredsmoothie/software/faceswap/lib/gpu_stats.py", line 17, in <module>
import pynvx # pylint: disable=import-error
ModuleNotFoundError: No module named 'pynvx'
Why set the backend to CPU and then choke trying to display options?
**To Reproduce**
Steps to reproduce the behavior:
1. configure a lack of GPU support
2. try to use the -h option to list available options and commands
**Expected behavior**
Don't try to load GPU info if no GPU support is configured
**Desktop (please complete the following information):**
- OS: MacOS X 11.2.3 (M1 arm64)
- Python Version Python 3.8.8
- Conda Version 4.10.0
- Commit ID f60eaee | null | https://github.com/deepfakes/faceswap/pull/1216 | null | {'base_commit': 'b057b719ce5665590beb3ba1782721bc6257963a', 'files': [{'path': 'INSTALL.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22, 147], 'mod': [57]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22, 40]}}}, {'path': 'lib/gpu_stats/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {"('_Backend', '__init__', 32)": {'mod': [33]}, "('_Backend', '_configure_backend', 85)": {'mod': [95, 96]}, "(None, 'set_backend', 122)": {'mod': [127]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {"('Environment', 'set_config', 284)": {'add': [289]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/gpu_stats/__init__.py",
"setup.py",
"lib/utils.py"
],
"doc": [
"README.md",
"INSTALL.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
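Editor's note: the traceback in this record comes from importing `pynvx` at module import time, before the `{"backend": "cpu"}` setting can have any effect. The usual remedy is to defer vendor imports until a GPU backend is actually selected; the linked PR touching `lib/gpu_stats/__init__.py` is consistent with that. A hedged sketch (the module mapping and return shape are illustrative, not faceswap's real `GPUStats`):

```python
# Select a GPU-stats implementation per backend; the CPU path never touches
# any vendor module, so `faceswap.py -h` can run without pynvx installed.
import importlib

_BACKEND_MODULE = {"nvidia": "pynvml", "apple": "pynvx"}  # illustrative

def load_gpu_stats(backend):
    """CPU backend needs no vendor module; GPU backends import lazily."""
    if backend == "cpu":
        return {"device_count": 0, "devices": []}
    mod_name = _BACKEND_MODULE.get(backend)
    if mod_name is None:
        raise ValueError("unknown backend: %s" % backend)
    stats_mod = importlib.import_module(mod_name)  # deferred vendor import
    return {"device_count": getattr(stats_mod, "device_count", 0)}

print(load_gpu_stats("cpu"))  # {'device_count': 0, 'devices': []}
```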
deepfakes | faceswap | b057b719ce5665590beb3ba1782721bc6257963a | https://github.com/deepfakes/faceswap/issues/1197 | Please Support for Apple M1 pro/max | As we know, Apple has released new silicon, the M1 Pro/Max.
It has powerful GPUs and CPUs.
Is there any chance to run FaceSwap on the new MacBook Pro?
| null | https://github.com/deepfakes/faceswap/pull/1216 | null | {'base_commit': 'b057b719ce5665590beb3ba1782721bc6257963a', 'files': [{'path': 'INSTALL.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22, 147], 'mod': [57]}}}, {'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [22, 40]}}}, {'path': 'lib/gpu_stats/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {"('_Backend', '__init__', 32)": {'mod': [33]}, "('_Backend', '_configure_backend', 85)": {'mod': [95, 96]}, "(None, 'set_backend', 122)": {'mod': [127]}}}, {'path': 'setup.py', 'status': 'modified', 'Loc': {"('Environment', 'set_config', 284)": {'add': [289]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/gpu_stats/__init__.py",
"setup.py",
"lib/utils.py"
],
"doc": [
"README.md",
"INSTALL.md"
],
"test": [],
"config": [],
"asset": []
} | 1 | |
deepfakes | faceswap | 85c5e8b66c00b096c31f416cc4954d611c3fdb14 | https://github.com/deepfakes/faceswap/issues/39 | bug
good first issue
dev
performance | Don't reload models everytime `convert_one_image` is called | ## Expected behavior
Use the convert command to convert a directory. `convert_one_image` loads the model once.
## Actual behavior
Use the convert command to convert a directory. `convert_one_image` loads the model every time that it is called.
| null | https://github.com/deepfakes/faceswap/pull/52 | null | {'base_commit': '85c5e8b66c00b096c31f416cc4954d611c3fdb14', 'files': [{'path': 'faceswap.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [2, 8, 17, 18, 19, 20]}}}, {'path': 'lib/DetectedFace.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/aligner.py', 'status': 'modified', 'Loc': {"(None, 'get_align_mat', 25)": {'mod': [26]}}}, {'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('DirectoryProcessor', 'process_arguments', 34)": {'add': [47], 'mod': [49, 51]}, '(None, None, None)': {'mod': [5]}, "('DirectoryProcessor', 'process_directory', 51)": {'mod': [56, 59]}, "('DirectoryProcessor', None, 14)": {'mod': [62]}}}, {'path': 'lib/faces_detect.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [3, 4, 28]}, "(None, 'detect_faces', 6)": {'mod': [9, 11, 12, 13, 14, 15, 16]}}}, {'path': 'lib/model.py', 'status': 'removed', 'Loc': {}}, {'path': 'lib/training_data.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [45]}, "(None, 'get_training_data', 13)": {'mod': [13, 14, 15, 16, 17, 18, 20, 21, 22, 23, 24, 26, 27, 29]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [12], 'mod': [1, 2]}, "(None, 'get_folder', 8)": {'mod': [10]}, "(None, 'load_images', 18)": {'mod': [18, 19, 20, 21, 22, 23, 24, 25, 26]}}}, {'path': 'plugins/Convert_Adjust.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('Convert', None, 5)": {'mod': [6, 7]}, "('Convert', 'patch_image', 12)": {'mod': [21]}}}, {'path': 'plugins/Convert_Masked.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [6]}, "('Convert', None, 8)": {'mod': [9, 10]}, "('Convert', 'get_new_face', 51)": {'mod': [54]}, "('Convert', 'get_image_mask', 58)": {'mod': [67]}}}, {'path': 'plugins/Extract_Align.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('Extract', 'extract', 6)": {'add': 
[7]}}}, {'path': 'plugins/Extract_Crop.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}}}, {'path': 'plugins/PluginLoader.py', 'status': 'modified', 'Loc': {"('PluginLoader', None, 2)": {'add': [12]}}}, {'path': 'scripts/convert.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [7, 8, 9]}, "('ConvertImage', None, 13)": {'mod': [38, 39, 40, 42, 43, 44, 45]}, "('ConvertImage', 'process_image', 38)": {'mod': [47, 49, 50, 51, 52, 53, 54, 56, 57, 59, 60]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [5]}, "('ExtractTrainingData', None, 8)": {'mod': [18, 19]}, "('ExtractTrainingData', 'process_image', 18)": {'mod': [22, 23, 24, 25, 26, 28, 29, 30, 31]}}}, {'path': 'scripts/train.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10], 'mod': [5, 6, 8, 9]}, "('TrainingProcessor', 'process_arguments', 18)": {'mod': [24, 25, 26, 27, 28, 29, 30]}, "('TrainingProcessor', None, 12)": {'mod': [89, 90, 91, 92, 93, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 107, 108, 109, 111, 113, 114, 115, 116]}, "('TrainingProcessor', 'process', 118)": {'mod': [119, 122, 123, 125, 127, 129, 131, 132, 133, 134, 135, 136, 138, 139, 140, 142, 143, 144, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/aligner.py",
"lib/model.py",
"lib/training_data.py",
"plugins/Convert_Adjust.py",
"plugins/Extract_Align.py",
"plugins/Extract_Crop.py",
"scripts/train.py",
"faceswap.py",
"plugins/PluginLoader.py",
"plugins/Convert_Masked.py",
"lib/DetectedFace.py",
"lib/faces_detect.py",
"lib/utils.py",
"lib/cli.py",
"scripts/convert.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
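Editor's note: the fix requested in this record is plain memoisation of an expensive load. A hedged, self-contained sketch (the names mirror the issue but the bodies are stand-ins, not faceswap's real code):

```python
# Load the model once and reuse it across calls, instead of reloading
# inside convert_one_image on every frame.
_model_cache = {}
load_count = 0

def load_model(path):
    """Stand-in for an expensive weights load."""
    global load_count
    load_count += 1
    return {"path": path}

def get_model(path):
    if path not in _model_cache:
        _model_cache[path] = load_model(path)
    return _model_cache[path]

def convert_one_image(image, model_path="model.h5"):
    model = get_model(model_path)  # cheap after the first call
    return (image, model["path"])

for img in ["a.png", "b.png", "c.png"]:
    convert_one_image(img)
print(load_count)  # 1
```

The same effect can be had by making the model an attribute of the converter object, created once in `__init__`, which is closer to how plugin classes are usually structured.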
deepfakes | faceswap | 9438672b1cf80602fc93536670d9601d655377f5 | https://github.com/deepfakes/faceswap/issues/213 | code to integrate | check for duplicates in extract folder | Hello all,
I have been having trouble with cloud servers shutting down unexpectedly so I edited the original `extract.py` to not overwrite if the image has already been processed in a previous run.
Note that I am currently assuming an `idx` of `0` (i.e. a single face was found in the photo, usually denoting successful face extraction - all extracted images with a nonzero index have been failures from what I've seen; please enlighten me further!)
This can be handy since somebody may update his image db but should not wait for complete re-extraction!
Note that this is on an earlier version I pulled from this repo so not directly applicable, but I am sure this can be implemented extremely quickly.
Just thought I'd share this idea: you can have a `-no` flag in the extract command to prevent overwriting.
Thoughts? Thanks to all contributors for the good work!
``` python
import os

def process(self):
    extractor_name = "Align"  # TODO Pass as argument
    extractor = PluginLoader.get_extractor(extractor_name)()
    try:
        for filename in self.read_directory():
            output_file = self.output_dir / Path(filename).stem
            output_file_to_check = os.path.abspath(str(output_file) +
                                                   '0' +
                                                   Path(filename).suffix)
            if os.path.isfile(output_file_to_check):
                print('File {} already exists, will not overwrite'.format(output_file_to_check))
            else:
                image = cv2.imread(filename)
                for idx, face in self.get_faces(image):
                    resized_image = extractor.extract(image, face, 256)
                    cv2.imwrite(str(output_file) + str(idx) + Path(filename).suffix, resized_image)
    except Exception as e:
        print('Failed to extract from image: {}. Reason: {}'.format(filename, e))
``` | null | https://github.com/deepfakes/faceswap/pull/214 | null | {'base_commit': '9438672b1cf80602fc93536670d9601d655377f5', 'files': [{'path': 'lib/cli.py', 'status': 'modified', 'Loc': {"('DirectoryProcessor', 'process_arguments', 39)": {'add': [53], 'mod': [56]}, "('DirectoryProcessor', 'write_alignments', 80)": {'add': [84]}, "('DirectoryProcessor', 'get_faces_alignments', 105)": {'mod': [119]}, "('DirectoryProcessor', 'get_faces', 122)": {'mod': [136]}}}, {'path': 'lib/utils.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'get_image_paths', 14)": {'mod': [14, 15, 16]}}}, {'path': 'scripts/extract.py', 'status': 'modified', 'Loc': {"('ExtractTrainingData', 'add_optional_arguments', 22)": {'add': [40]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"lib/cli.py",
"lib/utils.py",
"scripts/extract.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
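Editor's note: the idempotent-extract idea in this record boils down to "skip work whose output file already exists". A hedged, self-contained version using `pathlib` (paths and the `'0'` index suffix mirror the snippet above; the text write is a stand-in for `cv2.imwrite`):

```python
# Skip extraction when the index-0 output already exists, so an interrupted
# run can be resumed without redoing (or overwriting) finished images.
from pathlib import Path
import tempfile

def extract_once(src_name, out_dir):
    out = Path(out_dir) / (Path(src_name).stem + "0" + Path(src_name).suffix)
    if out.is_file():
        return "skipped"
    out.write_text("face data")  # stand-in for cv2.imwrite
    return "written"

with tempfile.TemporaryDirectory() as d:
    print(extract_once("photo.jpg", d))  # written
    print(extract_once("photo.jpg", d))  # skipped
```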
3b1b | manim | b4ce0b910cd7265d046923162c922be840fa60c8 | https://github.com/3b1b/manim/issues/1677 | bug | Questionable indexing of Tex | ### Describe the bug
When I made videos, I used many math equations with specific variables or sub-expressions colored, and noticed some bugs in how manim indexes components of `Tex` mobjects. Recently I have been trying to refactor the `Tex` class and fix bugs concerning coloring, so I dived into the source code of `Tex`, only to find that manim just breaks the original tex string into substrings to build new `SingleStringTex` objects, and uses their lengths to do the indexing work. As the formula becomes more complicated, some issues cannot be handled through the `modify_special_strings` method.
1. Symbols given by the same command, like `\sqrt`, may have different shapes or even different numbers of components making them up.
2. The order of symbols may be swapped relative to the original tex string, as with `\frac`, `\sum`, and super- and subscripts.
When compiling a tex string, each specified piece of it should be tracked so that the indices of its corresponding components can be found. This, however, may require us to dive right into the nature of how TeX works... I want to look for external tools to finish the "tracking" work. I'm not even sure whether there are approaches to fixing this issue perfectly...
**Code**:
This is just a combination of messy stuff, so don't worry about its actual meaning...
```python
from manimlib import *
TEST_STR = """\\lim_{n \\to \\infty} \\left\\lfloor
\\sqrt{\\frac{1}{n !} \\mathrm{e}^{n} a_{n} + b_{n}^{p}} \\otimes
\\sqrt[n]{\\sum_{m = 0}^{n^{2}} \\tilde{c}_{m \\cdot n}^{b_{n}^{p}
\\cos \\left( \\theta \\right)}} \\right\\rfloor""".replace("\n", " ")
class TestScene(Scene):
def construct(self):
tex1 = Tex(
TEST_STR,
fill_color=TEAL
)
tex1.shift(2 * UP)
tex2 = Tex(
TEST_STR,
tex_to_color_map={"b": LIGHT_PINK, "\\sum": YELLOW},
fill_color=TEAL
)
tex2.shift(2 * DOWN)
sub_tex = VGroup(*[
Tex(s, fill_color=BLUE)
for s in re.split(r"(b|\\sum)", TEST_STR)
]).scale(0.8).arrange(RIGHT, buff=0.7)
self.add(tex1, tex2, sub_tex)
# Labels of indices for debugging
self.add(
# index_labels(tex1[0]),
# *[index_labels(submob) for submob in tex2],
# *[index_labels(submob[0]) for submob in sub_tex]
)
```
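A toy illustration (hypothetical, not manim's actual code) of the length-based indexing described above: mapping each substring to a glyph range by accumulating per-substring glyph counts only works when every substring contributes a predictable, in-order run of glyphs, which is exactly what commands like `\sqrt` and `\frac` violate.

```python
# Toy model of length-based indexing: map each tex substring to a
# half-open glyph index range by accumulating assumed glyph counts.
# If a command's glyph count or glyph order differs from its source
# string (as with \sqrt or \frac), every later range is wrong.
substrings = [r"\sqrt{", "x", "+", "y", "}"]
glyph_counts = [2, 1, 1, 1, 0]  # assumed counts, not real measurements

ranges, start = {}, 0
for s, n in zip(substrings, glyph_counts):
    ranges[s] = (start, start + n)
    start += n
print(ranges[r"\sqrt{"])  # (0, 2)
```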
**Wrong display or Error traceback**:


| null | https://github.com/3b1b/manim/pull/1678 | null | {'base_commit': 'b4ce0b910cd7265d046923162c922be840fa60c8', 'files': [{'path': 'manimlib/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [39]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"manimlib/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
3b1b | manim | 05db6174e9d677fe26eb863592d88e5cf02cf8cb | https://github.com/3b1b/manim/issues/28 | Windows 10 - No module named . (period) | I've tried the python extract_scene.py -p example_scenes.py SquareToCircle example on cmd, and I get the above error.
*I've looked around and it seems that a few people have had this problem, but I can't find anyone who has a solution. .(period) is syntax for a relative import, but I don't know how to fix it from there. | null | https://github.com/3b1b/manim/pull/38 | null | {'base_commit': '05db6174e9d677fe26eb863592d88e5cf02cf8cb', 'files': [{'path': 'extract_scene.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 161]}, "(None, 'get_module', 154)": {'mod': [154, 156, 158]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"extract_scene.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | 43d28a8595450d39f800f650c25a7570b228db44 | https://github.com/3b1b/manim/issues/627 | Text rendering problem | ### Steps to reproduce
```python
from manimlib.imports import *
class Playground(Scene):
def construct(self):
text = TextMobject("print('Hello, world!')",
tex_to_color_map={'print': YELLOW})
self.play(FadeIn(text))
```
### The unexpected behavior that occurred
Notice between the 'print' and the '(' there is no space.
Wrong

Right

### Solution
I changed [here](https://github.com/3b1b/manim/blob/master/manimlib/mobject/svg/tex_mobject.py) in line 134
`"arg_separator": " ", --> "arg_separator": "",`
and also commented out [here](https://github.com/3b1b/manim/blob/master/manimlib/mobject/svg/tex_mobject.py) in line 160
```
split_list = [str(x).strip() for x in split_list] -->
#split_list = [str(x).strip() for x in split_list]
```
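For context on the `arg_separator` change proposed above, a minimal plain-Python illustration (not manim's actual code) of why the separator matters: joining the broken-up substrings with a space inserts whitespace the user never typed.

```python
# Joining tex substrings with " " (the old default) adds a space that
# is not in the source string; joining with "" preserves the input.
pieces = ["print", "('Hello, world!')"]
with_space = " ".join(pieces)
without_space = "".join(pieces)
print(with_space)     # print ('Hello, world!')
print(without_space)  # print('Hello, world!')
```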
I also found something interesting in TexMobject(): no matter how many spaces I enter, like
`text = TexMobject("print ('Hello, world!')")`
it always ignores those spaces. | null | https://github.com/3b1b/manim/pull/628 | null | {'base_commit': '43d28a8595450d39f800f650c25a7570b228db44', 'files': [{'path': 'manimlib/mobject/svg/tex_mobject.py', 'status': 'modified', 'Loc': {"('TextMobject', None, 241)": {'add': [244]}, "('TexMobject', 'break_up_tex_strings', 152)": {'mod': [160]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "3",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"manimlib/mobject/svg/tex_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | 3362f93964cae6f610a47d2da0e076b51a9eab42 | https://github.com/3b1b/manim/issues/1017 | Text(" ") don't move. Because of that Text("a b") shows wrong transform animation. | ```python
class test(Scene):
def construct(self):
text = Text(" ")
text.to_corner(DOWN+LEFT)
rect = SurroundingRectangle(text)
self.add(text,rect)
```
## Output

```python
class test(Scene):
def construct(self):
text = Text("a b")
text1 = Text("123")
text.to_corner(DOWN+LEFT)
text1.to_edge(RIGHT+DOWN)
rect = SurroundingRectangle(text)
self.add(text,rect)
self.play(Transform(text,text1))
self.wait()
```
## Output

| null | https://github.com/3b1b/manim/pull/1035 | null | {'base_commit': '3362f93964cae6f610a47d2da0e076b51a9eab42', 'files': [{'path': 'manimlib/mobject/svg/svg_mobject.py', 'status': 'modified', 'Loc': {"('SVGMobject', 'get_mobjects_from', 76)": {'mod': [90, 91, 92]}}}, {'path': 'manimlib/mobject/svg/text_mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7]}, "('Text', None, 25)": {'add': [46]}, "('Text', '__init__', 49)": {'add': [52, 57], 'mod': [50, 81]}, "('Text', 'remove_last_M', 83)": {'mod': [86]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"manimlib/mobject/svg/svg_mobject.py",
"manimlib/mobject/svg/text_mobject.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
3b1b | manim | 994749ceadf9f87f2ebe40bbb795fbb2b696f377 | https://github.com/3b1b/manim/issues/39 | Python version problem? | While running the demo, ( python extract_scene.py -p example_scenes.py SquareToCircle ) I get the following exception:
File "extract_scene.py", line 46
print str(err)
^
SyntaxError: invalid syntax
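The `print str(err)` line in the traceback is Python 2 statement syntax; under Python 3, `print` is a built-in function and needs parentheses. A minimal illustration of the Python 3 form (not the repo's actual code):

```python
# Python 2: `print str(err)` is a statement - a SyntaxError under Python 3.
# Python 3: print is a function, so the equivalent call is:
err = ValueError("example")
print(str(err))  # example
```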
I believe it is somehow related to python version, right? | null | https://github.com/3b1b/manim/pull/97 | null | {'base_commit': '994749ceadf9f87f2ebe40bbb795fbb2b696f377', 'files': [{'path': 'active_projects/WindingNumber.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'point_to_rev', 381)": {'add': [384], 'mod': [381, 386]}, "('TestDual', 'construct', 86)": {'mod': [88]}, "(None, 'split_interval', 414)": {'mod': [414]}, "('RectangleData', 'splits_on_dim', 446)": {'mod': [456]}, "('RectangleData', 'split_line_on_dim', 460)": {'mod': [469]}, "(None, 'plane_poly_with_roots', 476)": {'mod': [477]}, "(None, 'plane_func_from_complex_func', 481)": {'mod': [482]}, "(None, 'point_func_from_complex_func', 484)": {'mod': [485]}, "('LoopSplitSceneMapped', 'setup', 954)": {'mod': [957]}}}, {'path': 'active_projects/basel.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2]}, "(None, 'show_line_length', 53)": {'mod': [55]}}}, {'path': 'active_projects/fourier.py', 'status': 'modified', 'Loc': {"('AddingPureFrequencies', 'play_mix', 276)": {'mod': [283]}, "('AddingPureFrequencies', 'separate_out_parts', 288)": {'mod': [314]}, "('WrapCosineGraphAroundCircle', 'show_initial_signal', 1073)": {'mod': [1082]}, "('ShowLowerFrequency', 'show_lower_frequency_signal', 1663)": {'mod': [1678]}, "('ShowLinearity', 'show_sum_of_signals', 1820)": {'mod': [1830]}, "('ShowCommutativeDiagram', 'apply_transform', 2077)": {'mod': [2084]}, "('FilterOutHighPitch', 'show_intensity_vs_time_graph', 2239)": {'mod': [2272]}, "('FilterOutHighPitch', 'get_broadcast_anims', 2412)": {'mod': [2421]}, "('WriteComplexExponentialExpression', 'show_eulers_formula', 2703)": {'mod': [2752]}, "('ScaleUpCenterOfMass', 'scale_up_center_of_mass', 3236)": {'mod': [3279, 3364]}, "('SummarizeFormula', 'construct', 3739)": {'mod': [3749]}, "('BoundsAtInfinity', 'construct', 3790)": {'mod': [3807]}, "('BoundsAtInfinity', 'get_time_interval', 3889)": {'mod': [3892]}, 
"('ShowUncertaintyPrinciple', 'construct', 3921)": {'mod': [3972]}}}, {'path': 'animation/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [7]}}}, {'path': 'animation/continual_animation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [3]}}}, {'path': 'animation/playground.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 9], 'mod': [4, 5]}, "('Vibrate', 'update_mobject', 37)": {'mod': [45]}}}, {'path': 'animation/simple_animations.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [9, 10, 11]}}}, {'path': 'animation/transform.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [9]}, "('ApplyMethod', '__init__', 133)": {'mod': [145, 153, 154, 155]}}}, {'path': 'camera/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}}}, {'path': 'camera/camera.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [10]}, "('Camera', 'display_image_mobject', 245)": {'mod': [296]}, "('Camera', 'overlay_rgba_array', 312)": {'mod': [315]}}}, {'path': 'eop/bayes.py', 'status': 'modified', 'Loc': {"('UpdatePokerPrior', 'get_prior_labels', 1038)": {'mod': [1046]}, "('MusicExample', 'record_track', 1661)": {'mod': [1665]}}}, {'path': 'eop/bayes_footnote.py', 'status': 'modified', 'Loc': {"('TryUnitSquareVisual', 'add_prior_division', 509)": {'mod': [517]}, "('ShowRestrictedSpace', 'fade_out_negative_result_individuals', 685)": {'mod': [703]}, "('CompareNumbersInBothExamples', 'construct', 1370)": {'mod': [1385, 1393]}}}, {'path': 'eop/combinations.py', 'status': 'modified', 'Loc': {"('ExperienceProblemSolver', 'think_about_patterns', 175)": {'mod': [209]}, "('IntroducePascalsTriangle', 'show_triangle', 1801)": {'mod': [1810]}, "('StacksApproachBellCurve', 'construct', 2059)": {'mod': [2149]}, "('ChooseThreeFromFive', 'that_phrase_is_confusing', 2380)": {'mod': [2441]}, "('ChooseThreeFromFive', 'get_names', 2488)": 
{'mod': [2491]}, "('StudentsGetConfused', 'create_pi_creatures', 2699)": {'mod': [2702]}}}, {'path': 'eop/independence.py', 'status': 'modified', 'Loc': {"('MeaningOfIndependence', 'align_conditionals', 229)": {'mod': [236]}, "('ThousandPossibleQuizzes', 'ask_about_second_question', 948)": {'mod': [956]}, "('ShowAllEightConditionals', 'show_all_conditionals', 1505)": {'mod': [1516]}, "('NameBinomial', 'add_quiz_questions', 2311)": {'mod': [2336]}, "('CycleThroughPatterns', 'construct', 2527)": {'mod': [2560]}, "('CorrectForDependence', 'get_arrow_flip_anims', 3089)": {'mod': [3096]}, "('CompareTwoSituations', 'construct', 3289)": {'mod': [3293]}, "('SkepticalOfDistributions', 'get_binomial', 3417)": {'mod': [3421]}}}, {'path': 'example_scenes.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [3, 5, 6, 7, 8, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 24]}, "('WarpSquare', 'construct', 47)": {'mod': [50]}}}, {'path': 'extract_scene.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2], 'mod': [14, 15, 16]}, "(None, 'main', 201)": {'mod': [228]}}}, {'path': 'helpers.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [13]}, "(None, 'make_even_by_cycling', 486)": {'mod': [491, 492]}}}, {'path': 'mobject/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [7, 8, 9, 10]}}}, {'path': 'mobject/image_mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [8, 9]}}}, {'path': 'mobject/mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [8]}, "('Mobject', 'apply_complex_function', 200)": {'mod': [202]}, "('Mobject', 'align_submobjects', 759)": {'mod': [764]}}}, {'path': 'mobject/point_cloud_mobject.py', 'status': 'modified', 'Loc': {"('PMobject', 'pointwise_become_partial', 149)": {'mod': [152]}}}, {'path': 'mobject/region.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [6]}}}, {'path': 
'mobject/svg_mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [4]}, "('SVGMobject', 'circle_to_mobject', 116)": {'mod': [121]}}}, {'path': 'mobject/tex_mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 8], 'mod': [3, 4]}}}, {'path': 'mobject/vectorized_mobject.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0, 5], 'mod': [3]}, "('VMobject', 'set_points_as_corners', 177)": {'mod': [183]}}}, {'path': 'old_projects/bell.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0], 'mod': [32]}, "('PhotonsThroughPerpendicularFilters', 'get_photons', 213)": {'mod': [223]}, "('PhotonsThroughPerpendicularFilters', 'get_probability_text', 226)": {'mod': [247]}, "('ShowVariousFilterPairsWithPhotonsOverTime', None, 615)": {'mod': [620]}, "('ShowVariousFilterPairs', 'get_lines', 859)": {'mod': [868]}, "('ShowVariousFilterPairsFrom0To45', 'mention_probabilities', 898)": {'mod': [908]}, "('ForgetPreviousActions', None, 921)": {'mod': [926]}, "('VennDiagramProofByContradiction', 'draw_venn_diagram', 1395)": {'mod': [1423]}, "('VennDiagramProofByContradiction', 'setup_venn_diagram_sections', 1998)": {'mod': [2006]}, "('NoFirstMeasurementPreferenceBasedOnDirection', None, 2408)": {'mod': [2413]}}}, {'path': 'old_projects/borsuk.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [27]}, "('WalkEquatorPostTransform', 'get_transverse_curve', 966)": {'mod': [972]}, "('ChoicesInNecklaceCutting', 'get_groups', 1827)": {'mod': [1847]}, "('ChoicesInNecklaceCutting', 'get_boxes_and_labels', 1852)": {'mod': [1864]}, "('NecklaceDivisionSphereAssociation', 'show_binary_choice_association', 2101)": {'mod': [2112]}, "('TotalLengthOfEachJewelEquals', 'demonstrate_fair_division', 2228)": {'mod': [2245]}, "('ShowFunctionDiagram', 'add_number_pair', 2327)": {'mod': [2333]}}}, {'path': 'old_projects/brachistochrone/curves.py', 'status': 'modified', 'Loc': {"('TransitionAwayFromSlide', 'construct', 
368)": {'mod': [376]}}}, {'path': 'old_projects/brachistochrone/cycloid.py', 'status': 'modified', 'Loc': {"('CycloidScene', 'grow_parts', 57)": {'mod': [60]}, "('LeviSolution', 'show_diameter', 289)": {'mod': [319]}}}, {'path': 'old_projects/brachistochrone/drawing_images.py', 'status': 'modified', 'Loc': {"('NewtonVsJohann', 'construct', 275)": {'mod': [278]}, "('JohannThinksOfFermat', 'construct', 297)": {'mod': [300]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"eop/bayes.py",
"mobject/svg_mobject.py",
"mobject/tex_mobject.py",
"animation/playground.py",
"eop/bayes_footnote.py",
"mobject/__init__.py",
"active_projects/WindingNumber.py",
"animation/transform.py",
"eop/combinations.py",
"helpers.py",
"animation/__init__.py",
"mobject/mobject.py",
"eop/independence.py",
"old_projects/brachistochrone/drawing_images.py",
"mobject/image_mobject.py",
"camera/camera.py",
"animation/simple_animations.py",
"mobject/region.py",
"old_projects/brachistochrone/curves.py",
"camera/__init__.py",
"mobject/point_cloud_mobject.py",
"old_projects/brachistochrone/cycloid.py",
"mobject/vectorized_mobject.py",
"active_projects/fourier.py",
"old_projects/borsuk.py",
"old_projects/bell.py",
"active_projects/basel.py",
"extract_scene.py",
"example_scenes.py",
"animation/continual_animation.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 | |
All-Hands-AI | OpenHands | cf439fa89cf45a5462336a10c3dfee4ab4c0ace8 | https://github.com/All-Hands-AI/OpenHands/issues/7060 | bug
openhands | [Bug]: Obsolete attribute in a unit test file | ### Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
### Describe the bug and reproduction steps
openhands-agent,
The file test_long_term_memory.py uses an attribute 'micro_agent_name' which is obsolete and has been removed from AgentConfig.
Please remove it from the tests too.
You ONLY need to work with test_long_term_memory.py, no other files, I took care of everything else.
### OpenHands Installation
Other
### OpenHands Version
_No response_
### Operating System
None
### Logs, Errors, Screenshots, and Additional Context
_No response_ | null | https://github.com/All-Hands-AI/OpenHands/pull/7061 | null | {'base_commit': 'cf439fa89cf45a5462336a10c3dfee4ab4c0ace8', 'files': [{'path': 'tests/unit/test_long_term_memory.py', 'status': 'modified', 'Loc': {"(None, 'mock_agent_config', 24)": {'mod': [26]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [
"tests/unit/test_long_term_memory.py"
],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 6e3b554317de7bc5d96ef81b4097287e05c0c4d0 | https://github.com/All-Hands-AI/OpenHands/issues/226 | enhancement
backend | Redesign docker sandbox | **What problem or use case are you trying to solve?**
We're using `exec_run` to run commands in the sandbox. This isn't stateful, and doesn't handle CLI interactions via stdin very well.
Things we struggle with today:
* We don't keep track of cd commands
* The agent can't interact with stdin (e.g. it runs apt-get install without -y, it wants to type y to get through)
* this is more important if we e.g. ask the agent to develop an interactive CLI that it needs to test
* [Can't use apt-get install in sandbox](https://github.com/OpenDevin/OpenDevin/issues/216) (due to permissions)
* [kill doesn't work](https://github.com/OpenDevin/OpenDevin/issues/179)
**Describe the UX of the solution you'd like**
Something closer to @xingyaoww 's original implementation: https://github.com/xingyaoww/OpenDevin/blob/8815aa95ba770110e9d6a4839fb7f9cef01ef4d7/opendevin/sandbox/docker.py
**Do you have thoughts on the technical implementation?**
Can we start the container, then connect an ssh or pty session?
**Describe alternatives you've considered**
* Hacking around `exec` 👎
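A minimal sketch of the stateful-session idea using plain `subprocess` (an illustrative assumption, not necessarily the approach the linked PR took): keep one long-lived shell whose stdin stays open, so state such as `cd` persists between commands and interactive prompts could be answered.

```python
import subprocess

# One long-lived bash process instead of one exec_run per command:
# state like the working directory survives across commands, and the
# open stdin pipe is available for interactive prompts.
shell = subprocess.Popen(
    ["bash"],
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    stderr=subprocess.STDOUT,
    text=True,
)
out, _ = shell.communicate("cd /tmp\npwd\necho state-kept\n")
print(out)
```

In the real project the session would live inside the sandbox container, e.g. over ssh or a pty, rather than on the host.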
| null | https://github.com/All-Hands-AI/OpenHands/pull/847 | null | {'base_commit': '6e3b554317de7bc5d96ef81b4097287e05c0c4d0', 'files': [{'path': 'opendevin/sandbox/Dockerfile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [16, 17]}}}, {'path': 'opendevin/sandbox/Makefile', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [4]}}}, {'path': 'opendevin/sandbox/sandbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [6], 'mod': [11]}, "('DockerInteractive', '__init__', 93)": {'add': [134, 136]}, "('DockerInteractive', None, 88)": {'add': [148]}, "('DockerInteractive', 'restart_docker_container', 255)": {'add': [273], 'mod': [270]}, "('DockerInteractive', 'setup_devin_user', 139)": {'mod': [141, 142, 143, 144, 145]}, "('DockerInteractive', 'get_exec_cmd', 149)": {'mod': [151]}, "('DockerInteractive', 'execute', 161)": {'mod': [162, 163, 164, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, 181]}}}, {'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3365, 3598, 3911, 3916, 3921, 3926, 3931, 3949], 'mod': [5877]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [25]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"opendevin/sandbox/sandbox.py"
],
"doc": [],
"test": [],
"config": [
"pyproject.toml",
"opendevin/sandbox/Makefile",
"opendevin/sandbox/Dockerfile",
"poetry.lock"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 07f0d1ccb347d1c67a189d53c7147916d05cd528 | https://github.com/All-Hands-AI/OpenHands/issues/4783 | bug
fix-me | [Bug]: Tool call metadata should NOT be None when function calling is enabled | ### Is there an existing issue for the same bug?
- [X] I have checked the existing issues.
### Describe the bug and reproduction steps
1. Manually run command in the client terminal (e.g., `pwd`)
2. Error is thrown
### OpenHands Installation
Docker command in README
### OpenHands Version
main
### Operating System
None
### Logs, Errors, Screenshots, and Additional Context
<img width="395" alt="image" src="https://github.com/user-attachments/assets/9afa3669-863f-4d16-97ff-9e5f21fffd3e">
| null | https://github.com/All-Hands-AI/OpenHands/pull/4955 | null | {'base_commit': '07f0d1ccb347d1c67a189d53c7147916d05cd528', 'files': [{'path': 'openhands/agenthub/codeact_agent/codeact_agent.py', 'status': 'modified', 'Loc': {"('CodeActAgent', 'get_action_message', 112)": {'add': [186], 'mod': [151, 156]}, "('CodeActAgent', 'get_observation_message', 189)": {'mod': [222, 223, 224]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/agenthub/codeact_agent/codeact_agent.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 123968f887a5eb101b549472805e4b9e4ac7bce0 | https://github.com/All-Hands-AI/OpenHands/issues/1686 | bug
severity:low | [Bug]: Error creating controller | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
I followed the quickstart guide and was able to open the UI, but I keep getting "Error creating controller". I checked the troubleshooting doc and verified that Docker is running using `docker ps`. I also checked existing issues and saw people saying that modifying the config.toml file with a line saying `SANDBOX_TYPE="exec"` might fix it. However with the (new?) installation method through Docker, there are no files to modify as the image is already made. Another thing I thought it might be is that I'm on Windows and WSL might not have the right permissions set? I'm not sure how to troubleshoot that.
### Current Version
```bash
Docker Desktop 4.29.0 (145265)
```
### Installation and Configuration
```bash
Alyssa@LAPTOP-U1RNRHQR MINGW64 ~
$ cd C:/Users/Alyssa/Documents/opendevintesting
Alyssa@LAPTOP-U1RNRHQR MINGW64 ~/Documents/opendevintesting
$ docker run \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e WORKSPACE_MOUNT_PATH="C:\Users\Alyssa\Documents\opendevintesting" \
-v "C:\Users\Alyssa\Documents\opendevintesting:/opt/workspace_base" \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
ghcr.io/opendevin/opendevin:0.5
0.5: Pulling from opendevin/opendevin
Digest: sha256:322c5ddcc40f0ac3b6727f63dda9fab87fea3cc1e90a1359f7229529a2c89684
Status: Image is up to date for ghcr.io/opendevin/opendevin:0.5
useradd warning: enduser's uid 197611 outside of the UID_MIN 499 and UID_MAX 60000 range.
stat: cannot statx '/var/run/docker.sock': No such file or directory
Docker socket group id:
Usage: usermod [options] LOGIN
Options:
-a, --append append the user to the supplemental GROUPS
mentioned by the -G option without removing
the user from other groups
-b, --badname allow bad names
-c, --comment COMMENT new value of the GECOS field
-d, --home HOME_DIR new home directory for the user account
-e, --expiredate EXPIRE_DATE set account expiration date to EXPIRE_DATE
-f, --inactive INACTIVE set password inactive after expiration
to INACTIVE
-g, --gid GROUP force use GROUP as new primary group
-G, --groups GROUPS new list of supplementary GROUPS
-h, --help display this help message and exit
-l, --login NEW_LOGIN new value of the login name
-L, --lock lock the user account
-m, --move-home move contents of the home directory to the
new location (use only with -d)
-o, --non-unique allow using duplicate (non-unique) UID
-p, --password PASSWORD use encrypted password for the new password
-P, --prefix PREFIX_DIR prefix directory where are located the /etc/* files
-r, --remove remove the user from only the supplemental GROUPS
mentioned by the -G option without removing
the user from other groups
-R, --root CHROOT_DIR directory to chroot into
-s, --shell SHELL new login shell for the user account
-u, --uid UID new UID for the user account
-U, --unlock unlock the user account
-v, --add-subuids FIRST-LAST add range of subordinate uids
-V, --del-subuids FIRST-LAST remove range of subordinate uids
-w, --add-subgids FIRST-LAST add range of subordinate gids
-W, --del-subgids FIRST-LAST remove range of subordinate gids
-Z, --selinux-user SEUSER new SELinux user mapping for the user account
INFO: Started server process [27]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:3000 (Press CTRL+C to quit)
INFO: 172.17.0.1:54844 - "GET / HTTP/1.1" 307 Temporary Redirect
INFO: ('172.17.0.1', 54856) - "WebSocket /ws?token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzaWQiOiIzOWZhODFmYS02YjhiLTQzMzYtODBjZi0zNzU0NjQ3ZTg0MDAifQ.5XamgoC0qQvuxmY_WKRufEKkSBrWNHJcvsB8NR_RycE" [accepted]
INFO: connection open
06:56:44 - opendevin:INFO: agent.py:125 - Creating agent CodeActAgent using LLM gpt-3.5-turbo
06:56:44 - opendevin:INFO: llm.py:78 - Initializing LLM with model: gpt-3.5-turbo
06:56:44 - opendevin:INFO: ssh_box.py:68 - SSHBox is running as opendevin user with USER_ID=197611 in the sandbox
06:56:44 - opendevin:ERROR: ssh_box.py:75 - Error creating controller. Please check Docker is running and visit `https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting` for more debugging information.
06:56:44 - opendevin:ERROR: agent.py:138 - Error creating controller: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 400, in request
self.endheaders()
File "/usr/local/lib/python3.12/http/client.py", line 1331, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1091, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1035, in send
self.connect()
File "/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py", line 27, in connect
sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/util/retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 400, in request
self.endheaders()
File "/usr/local/lib/python3.12/http/client.py", line 1331, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1091, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1035, in send
self.connect()
File "/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py", line 27, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 213, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/utils/decorators.py", line 44, in inner
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 236, in _get
return self.get(url, **self._set_request_timeout(kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/opendevin/server/agent/agent.py", line 130, in create_controller
self.controller = AgentController(
^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 82, in __init__
self.action_manager = ActionManager(self.id)
^^^^^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/action_manager.py", line 39, in __init__
self.sandbox = DockerSSHBox(
^^^^^^^^^^^^^
File "/app/opendevin/runtime/docker/ssh_box.py", line 79, in __init__
raise ex
File "/app/opendevin/runtime/docker/ssh_box.py", line 73, in __init__
self.docker_client = docker.from_env()
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/client.py", line 94, in from_env
return cls(
^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 220, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
06:56:44 - opendevin:INFO: agent_controller.py:201 - Setting agent state from AgentState.LOADING to AgentState.INIT
Starting loop_recv for sid: 39fa81fa-6b8b-4336-80cf-3754647e8400
INFO: 172.17.0.1:54844 - "GET /api/refresh-files HTTP/1.1" 200 OK
INFO: 172.17.0.1:54872 - "GET /api/litellm-models HTTP/1.1" 200 OK
INFO: 172.17.0.1:54886 - "GET /api/messages/total HTTP/1.1" 200 OK
INFO: 172.17.0.1:54886 - "GET /api/agents HTTP/1.1" 200 OK
INFO: 172.17.0.1:54886 - "DELETE /api/messages HTTP/1.1" 200 OK
06:57:16 - opendevin:INFO: agent.py:125 - Creating agent CodeActAgent using LLM gpt-4-turbo
06:57:16 - opendevin:INFO: llm.py:78 - Initializing LLM with model: gpt-4-turbo
06:57:16 - opendevin:WARNING: stream.py:30 - Subscriber subscribed multiple times: agent_controller
06:57:16 - opendevin:INFO: ssh_box.py:68 - SSHBox is running as opendevin user with USER_ID=197611 in the sandbox
06:57:16 - opendevin:ERROR: ssh_box.py:75 - Error creating controller. Please check Docker is running and visit `https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting` for more debugging information.
06:57:16 - opendevin:ERROR: agent.py:138 - Error creating controller: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 400, in request
self.endheaders()
File "/usr/local/lib/python3.12/http/client.py", line 1331, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1091, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1035, in send
self.connect()
File "/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py", line 27, in connect
sock.connect(self.unix_socket)
FileNotFoundError: [Errno 2] No such file or directory
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/util/retry.py", line 470, in increment
raise reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/util/util.py", line 38, in reraise
raise value.with_traceback(tb)
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/app/.venv/lib/python3.12/site-packages/urllib3/connection.py", line 400, in request
self.endheaders()
File "/usr/local/lib/python3.12/http/client.py", line 1331, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/lib/python3.12/http/client.py", line 1091, in _send_output
self.send(msg)
File "/usr/local/lib/python3.12/http/client.py", line 1035, in send
self.connect()
File "/app/.venv/lib/python3.12/site-packages/docker/transport/unixconn.py", line 27, in connect
sock.connect(self.unix_socket)
urllib3.exceptions.ProtocolError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 213, in _retrieve_server_version
return self.version(api_version=False)["ApiVersion"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/daemon.py", line 181, in version
return self._result(self._get(url), json=True)
^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/utils/decorators.py", line 44, in inner
return f(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 236, in _get
return self.get(url, **self._set_request_timeout(kwargs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 602, in get
return self.request("GET", url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/requests/adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/app/opendevin/server/agent/agent.py", line 130, in create_controller
self.controller = AgentController(
^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/agent_controller.py", line 82, in __init__
self.action_manager = ActionManager(self.id)
^^^^^^^^^^^^^^^^^^^^^^
File "/app/opendevin/controller/action_manager.py", line 39, in __init__
self.sandbox = DockerSSHBox(
^^^^^^^^^^^^^
File "/app/opendevin/runtime/docker/ssh_box.py", line 79, in __init__
raise ex
File "/app/opendevin/runtime/docker/ssh_box.py", line 73, in __init__
self.docker_client = docker.from_env()
^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/client.py", line 94, in from_env
return cls(
^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/client.py", line 45, in __init__
self.api = APIClient(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 197, in __init__
self._version = self._retrieve_server_version()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/app/.venv/lib/python3.12/site-packages/docker/api/client.py", line 220, in _retrieve_server_version
raise DockerException(
docker.errors.DockerException: Error while fetching server API version: ('Connection aborted.', FileNotFoundError(2, 'No such file or directory'))
06:57:16 - opendevin:INFO: agent_controller.py:201 - Setting agent state from AgentState.INIT to AgentState.INIT
```
### Model and Agent
- Model: gpt-4-turbo
- Agent: CodeActAgent
### Reproduction Steps
_No response_
### Logs, Errors, Screenshots, and Additional Context
_No response_ | null | https://github.com/All-Hands-AI/OpenHands/pull/1788 | null | {'base_commit': '123968f887a5eb101b549472805e4b9e4ac7bce0', 'files': [{'path': 'containers/app/Dockerfile', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [47]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [],
"test": [],
"config": [
"containers/app/Dockerfile"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 32ee6a5a646454a9dc2dae43275313e2d6f77073 | https://github.com/All-Hands-AI/OpenHands/issues/6440 | bug | [Bug]: KeyError: 'ExposedPorts' | ### Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
### Describe the bug and reproduction steps
```
23:07:30 - openhands:ERROR: session.py:128 - Error creating agent_session: 'ExposedPorts'
Traceback (most recent call last):
File "/workspaces/OpenHands/openhands/server/session/session.py", line 115, in initialize_agent
await self.agent_session.start(
File "/workspaces/OpenHands/openhands/server/session/agent_session.py", line 98, in start
await self._create_runtime(
File "/workspaces/OpenHands/openhands/server/session/agent_session.py", line 212, in _create_runtime
await self.runtime.connect()
File "/workspaces/OpenHands/openhands/runtime/impl/docker/docker_runtime.py", line 120, in connect
await call_sync_from_async(self._attach_to_container)
File "/workspaces/OpenHands/openhands/utils/async_utils.py", line 18, in call_sync_from_async
result = await coro
^^^^^^^^^^
File "/usr/local/python/3.12.1/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/workspaces/OpenHands/openhands/utils/async_utils.py", line 17, in <lambda>
coro = loop.run_in_executor(None, lambda: fn(*args, **kwargs))
^^^^^^^^^^^^^^^^^^^
File "/workspaces/OpenHands/openhands/runtime/impl/docker/docker_runtime.py", line 321, in _attach_to_container
for exposed_port in config['ExposedPorts'].keys():
~~~~~~^^^^^^^^^^^^^^^^
KeyError: 'ExposedPorts'
```
### OpenHands Installation
Docker command in README
### OpenHands Version
main
### Operating System
None
### Logs, Errors, Screenshots, and Additional Context
_No response_ | null | https://github.com/All-Hands-AI/OpenHands/pull/6460 | null | {'base_commit': '32ee6a5a646454a9dc2dae43275313e2d6f77073', 'files': [{'path': 'openhands/core/config/sandbox_config.py', 'status': 'modified', 'Loc': {"('SandboxConfig', None, 6)": {'mod': [75]}}}, {'path': 'openhands/runtime/impl/docker/docker_runtime.py', 'status': 'modified', 'Loc': {"('DockerRuntime', '_attach_to_container', 318)": {'mod': [330, 331, 332, 333]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/impl/docker/docker_runtime.py",
"openhands/core/config/sandbox_config.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | f9088766e826e208195345a7fcde4920a87df3dd | https://github.com/All-Hands-AI/OpenHands/issues/3527 | bug | [Bug]: openhands-ai Python package requires agenthub | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
When attempting to use the openhands-ai package from PyPI, I encounter an issue where `agenthub` cannot be imported. I believe this is because `agenthub` is imported, but it does not exist as part of the package on PyPI.
### Current OpenHands version
```bash
openhands-ai 0.8.3
```
### Installation and Configuration
I ran `poetry install openhands-ai`, then installed missing dependencies, then attempted to use it. Specifically it is failing on the import of `openhands.core.main`.
```bash
from openhands.controller.state.state import State
from openhands.core.config import AppConfig, SandboxConfig
from openhands.core.main import run_controller
from openhands.runtime import get_runtime_cls
```
### Model and Agent
_No response_
### Operating System
MacOS
### Reproduction Steps
1. Clone https://github.com/mattbarlow-sg/openhands-test
2. Run `poetry install`
3. Run `poetry shell`
4. Run `openhands-package`
### Logs, Errors, Screenshots, and Additional Context
```
ERROR:root: File "/Users/matt.barlow/Library/Caches/pypoetry/virtualenvs/openhands-package-8SiZbAsB-py3.12/bin/openhands-package", line 3, in <module>
from openhands_package.cli import ai_tools
File "/Users/matt.barlow/Engineering/openhands-package/openhands_package/__init__.py", line 1, in <module>
from .cli import main
File "/Users/matt.barlow/Engineering/openhands-package/openhands_package/cli.py", line 7, in <module>
from openhands.core.main import run_controller
File "/Users/matt.barlow/Library/Caches/pypoetry/virtualenvs/openhands-package-8SiZbAsB-py3.12/lib/python3.12/site-packages/openhands/core/main.py", line 7, in <module>
import agenthub # noqa F401 (we import this to get the agents registered)
^^^^^^^^^^^^^^^
ERROR:root:<class 'ModuleNotFoundError'>: No module named 'agenthub'
``` | null | https://github.com/All-Hands-AI/OpenHands/pull/3548 | null | {'base_commit': 'f9088766e826e208195345a7fcde4920a87df3dd', 'files': [{'path': 'openhands/runtime/utils/runtime_build.py', 'status': 'modified', 'Loc': {"(None, '_create_project_source_dist', 34)": {'mod': [62]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [9, 45], 'mod': [2, 69, 70, 71, 72, 85, 86, 87, 88]}}}, {'path': 'tests/unit/test_runtime_build.py', 'status': 'modified', 'Loc': {"(None, '_check_source_code_in_dir', 28)": {'add': [38], 'mod': [54]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/utils/runtime_build.py"
],
"doc": [],
"test": [
"tests/unit/test_runtime_build.py"
],
"config": [
"pyproject.toml"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 356caf0960df558be438f8c3e357e808c0619238 | https://github.com/All-Hands-AI/OpenHands/issues/1514 | enhancement
severity:low | Micro-agent: typo checker | **What problem or use case are you trying to solve?**
Micro-agents are small agents that specialize in one field. You don't have to write code to define a new micro-agent! Take a look at existing micro-agents: https://github.com/OpenDevin/OpenDevin/tree/main/agenthub/micro
We could add a new micro-agent that scans file(s) at the given path (or maybe the current workspace?) and **just fixes the typos** in-place. Motivation: typos are everywhere. Most project owners welcome PRs that fix typos, but few of them are happy to see their docs and/or docstrings get completely rewritten & polished by LLMs.
**Do you have thoughts on the technical implementation?**
We should think about how we want to prompt the LLM to fix the typos. A naive approach is to let the LLM review each document and return a new document with the typos fixed. This might waste a lot of output tokens. An alternative is to instruct the LLM to return (typo, fix) pairs, and then use `sed` or `awk` to fix them in-place. This might need some experimentation. Both approaches could cause false positives.
| null | https://github.com/All-Hands-AI/OpenHands/pull/1613 | null | {'base_commit': '356caf0960df558be438f8c3e357e808c0619238', 'files': [{'path': 'agenthub/micro/agent.py', 'status': 'modified', 'Loc': {"(None, 'parse_response', 16)": {'mod': [17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"agenthub/micro/agent.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 97e938d5450728128ccbf896ecbc5963ac223012 | https://github.com/All-Hands-AI/OpenHands/issues/6382 | bug
awaiting release | [Bug]: The sandbox container is being recreated when rejoining an existing conversation (all changes are lost) | ### Is there an existing issue for the same bug?
- [x] I have checked the existing issues.
### Expected result
- When joining an existing conversation, OH must start the same container (already created for this conversation), instead of creating a new one from scratch.
- Each conversation must have their own exclusive container.
- `keep_runtime_alive = 1` should also be the default IMO.
- Restarting OH must also keep the sandbox containers.
- The sandbox containers can be destroyed once the conversation/session is deleted.
### Describe the bug and reproduction steps
With `keep_runtime_alive = 0`:
The sandbox container is being recreated when rejoining an existing conversation and all changes are lost.
With `keep_runtime_alive = 1`:
The container is not destroyed, but the same sandbox container is shared for all conversations, which is also incorrect.
### OpenHands Installation
Docker command in README
### OpenHands Version
main (2025-01-21)
### Operating System
WSL on Windows
### Test case
- Start a conversation
- Use `docker ps` to get the container ID
- Restart OH and resume the conversation
- Use `docker ps` to get the container ID and confirm it's the same | null | https://github.com/All-Hands-AI/OpenHands/pull/6402 | null | {'base_commit': 'b468150f2abf0f4c8bcf05072f808dd8a086e9c6', 'files': [{'path': 'openhands/runtime/impl/docker/docker_runtime.py', 'status': 'modified', 'Loc': {"('DockerRuntime', '__init__', 57)": {'mod': [69]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/impl/docker/docker_runtime.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 5f61885e44cf1841fe9ec82befd38cf45b13869b | https://github.com/All-Hands-AI/OpenHands/issues/2866 | bug | [Bug]: azure open ai config | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
Following the documentation, I ran the Azure OpenAI config. I managed to open the UI, but I got:
Agent encountered an error.
Would it be possible to give an example of an Azure OpenAI configuration? This one is not so clear: https://docs.all-hands.dev/modules/usage/llms/azureLLMs#azure-openai-configs
### Current OpenDevin version
```bash
ghcr.io/opendevin/opendevin
```
### Installation and Configuration
```bash
I ran this command in the terminal:
WORKSPACE_BASE=$(pwd)/workspace
docker run -it \
--pull=always \
-e SANDBOX_USER_ID=$(id -u) \
-e WORKSPACE_MOUNT_PATH=$WORKSPACE_BASE \
-e LLM_BASE_URL="https://xxx.openai.azure.com/" \
-v $WORKSPACE_BASE:/opt/workspace_base \
-v /var/run/docker.sock:/var/run/docker.sock \
-p 3000:3000 \
--add-host host.docker.internal:host-gateway \
--name opendevin-app-$(date +%Y%m%d%H%M%S) \
ghcr.io/opendevin/opendevin
```
### Model and Agent
_No response_
### Operating System
WSL
### Reproduction Steps
_No response_
### Logs, Errors, Screenshots, and Additional Context
File "/app/.venv/lib/python3.12/site-packages/litellm/utils.py", line 7496, in exception_type
raise APIConnectionError(
litellm.exceptions.APIConnectionError: litellm.APIConnectionError: AzureException APIConnectionError - 'NoneType' object has no attribute 'split' | null | https://github.com/All-Hands-AI/OpenHands/pull/2894 | null | {'base_commit': '5f61885e44cf1841fe9ec82befd38cf45b13869b', 'files': [{'path': 'docs/modules/usage/llms/azureLLMs.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [17], 'mod': [15, 35, 36]}}}, {'path': 'docs/modules/usage/llms/localLLMs.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [43]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [],
"doc": [
"docs/modules/usage/llms/localLLMs.md",
"docs/modules/usage/llms/azureLLMs.md"
],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 3661893161826c2a36bacdb3b08d12c805134bee | https://github.com/All-Hands-AI/OpenHands/issues/4142 | documentation
enhancement
fix-me | Documentation: Create a "Usage Methods -> GUI Mode" page | **What problem or use case are you trying to solve?**
Currently we have pages about different usage methods, CLI and headless, and soon GitHub Actions (#4113).
However, we don't have a page describing GUI mode, other than the Getting Started page. We can start out by copying the information from the "Getting Started" page, and then add more information about how to interact with the GUI.
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"docs/sidebars.ts"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | bfa1de4a6b18d3b8493b94f6e54e360012957fdc | https://github.com/All-Hands-AI/OpenHands/issues/2714 | bug
good first issue | [Bug]: The long filename will stretch the workspace panel | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://opendevin.github.io/OpenDevin/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
The issue manifests as follows:
<img width="1513" alt="image" src="https://github.com/OpenDevin/OpenDevin/assets/16201837/3468e1cc-352a-4483-a883-d6a37a11157e">
We can limit the maximum display length, or allow the panel to freely adjust its width and scroll along the x and y axes.
### Current OpenDevin version
```bash
0.7.0
```
### Installation and Configuration
```bash
Default configuration.
```
### Model and Agent
_No response_
### Operating System
_No response_
### Reproduction Steps
_No response_
### Logs, Errors, Screenshots, and Additional Context
_No response_ | null | https://github.com/All-Hands-AI/OpenHands/pull/2731 | null | {'base_commit': 'bfa1de4a6b18d3b8493b94f6e54e360012957fdc', 'files': [{'path': 'frontend/src/components/file-explorer/FileExplorer.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [219, 222]}}}, {'path': 'frontend/src/components/file-explorer/TreeNode.tsx', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [23, 24, 25]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"frontend/src/components/file-explorer/FileExplorer.tsx",
"frontend/src/components/file-explorer/TreeNode.tsx"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 93d2e4a338adcaa8acaa602adad14364abca821f | https://github.com/All-Hands-AI/OpenHands/issues/3903 | bug | [Bug]: LocalBox has been removed from 0.9.0 | ### Is there an existing issue for the same bug?
- [X] I have checked the troubleshooting document at https://docs.all-hands.dev/modules/usage/troubleshooting
- [X] I have checked the existing issues.
### Describe the bug
Hey team,
We built our setup based on the local sandbox in OpenShift with restricted permissions. We did it after this discussion: https://github.com/All-Hands-AI/OpenHands/discussions/2675
But we found there is no local sandbox in v0.9.0+ and it breaks our setup :(
Is there a replacement for it, or would it be possible to revert these changes?
Many thanks!
### Current OpenHands version
```bash
0.9.0+
```
### Installation and Configuration
We've written our own Dockerfile based on yours:
```bash
FROM ghcr.io/opendevin/opendevin:0.7
RUN chmod 777 -R /app
ENTRYPOINT []
USER root
# install basic packages
RUN apt-get update && apt-get install -y \
curl \
wget \
git \
vim \
nano \
unzip \
zip \
python3 \
python3-pip \
python3-venv \
python3-dev \
build-essential \
openssh-server \
sudo \
gcc \
jq \
g++ \
make \
iproute2 \
&& rm -rf /var/lib/apt/lists/*
RUN mkdir -p -m0755 /var/run/sshd
# symlink python3 to python
RUN ln -s /usr/bin/python3 /usr/bin/python
# ==== OpenDevin Runtime Client ====
RUN mkdir -p /opendevin && mkdir -p /opendevin/logs && chmod 777 /opendevin/logs
RUN wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
RUN bash Miniforge3-$(uname)-$(uname -m).sh -b -p /opendevin/miniforge3
RUN chmod -R g+w /opendevin/miniforge3
RUN bash -c ". /opendevin/miniforge3/etc/profile.d/conda.sh && conda config --set changeps1 False && conda config --append channels conda-forge"
RUN echo "" > /opendevin/bash.bashrc
# - agentskills dependencies
RUN /opendevin/miniforge3/bin/pip install --upgrade pip
RUN /opendevin/miniforge3/bin/pip install jupyterlab notebook jupyter_kernel_gateway flake8
RUN /opendevin/miniforge3/bin/pip install python-docx PyPDF2 python-pptx pylatexenc openai
RUN chmod 777 -R /opendevin
RUN mkdir -p /opt/workspace_base && chmod -R 777 /opt/workspace_base
RUN sed "s/config.sandbox_type/\'local\'/g" -i /app/opendevin/runtime/runtime.py && sed '24,27{/.*/d}' -i /app/opendevin/runtime/plugins/mixin.py && mkdir /opendevin/plugins/ && cp -av /app/opendevin/runtime/plugins/jupyter /opendevin/plugins/ && cp -av /app/opendevin/runtime/plugins/agent_skills /opendevin/plugins/
RUN export PATH=/opendevin/miniforge3/bin:/app/.venv/bin:/usr/local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN echo $PATH
RUN cd /app && playwright install
CMD ["uvicorn", "opendevin.server.listen:app", "--host", "0.0.0.0", "--port", "3000"]
```
We combined opendevin and sandbox into the same container, changed paths and permission.
This image works without root/docker etc so we were able to start it under restrictedv2 Openshift SCC
```
### Model and Agent
_No response_
### Operating System
_No response_
### Reproduction Steps
_No response_
### Logs, Errors, Screenshots, and Additional Context
_No response_ | null | https://github.com/All-Hands-AI/OpenHands/pull/5284 | null | {'base_commit': '93d2e4a338adcaa8acaa602adad14364abca821f', 'files': [{'path': '.github/workflows/ghcr-build.yml', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [236]}}}, {'path': 'openhands/runtime/README.md', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [111, 114]}}}, {'path': 'openhands/runtime/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [5]}, "(None, 'get_runtime_cls', 11)": {'add': [23]}}}, {'path': 'openhands/runtime/action_execution_server.py', 'status': 'modified', 'Loc': {"('ActionExecutor', None, 83)": {'add': [165]}, '(None, None, None)': {'mod': [69, 70, 71]}}}, {'path': 'openhands/runtime/plugins/jupyter/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "('JupyterPlugin', 'initialize', 22)": {'mod': [25, 26, 27, 33, 34, 35, 36, 37]}}}, {'path': 'openhands/runtime/plugins/vscode/__init__.py', 'status': 'modified', 'Loc': {"('VSCodePlugin', None, 18)": {'add': [21]}}}, {'path': 'openhands/runtime/utils/bash.py', 'status': 'modified', 'Loc': {"('BashSession', 'initialize', 184)": {'add': [189], 'mod': [187]}}}, {'path': 'openhands/runtime/utils/command.py', 'status': 'modified', 'Loc': {"(None, 'get_action_execution_server_startup_command', 14)": {'add': [19], 'mod': [35, 48, 50]}}}, {'path': 'openhands/runtime/utils/runtime_init.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [0]}, "(None, 'init_user_and_working_directory', 6)": {'add': [33]}}}, {'path': 'poetry.lock', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [3372, 3869, 7776, 10128], 'mod': [1, 231, 338, 1144, 1170, 1373, 1384, 1492, 1528, 1779, 3308, 3342, 3406, 3638, 3661, 4805, 5389, 6244, 6376, 6508, 6654, 6697, 7661, 8818, 9316, 9973, 10558]}}}, {'path': 'pyproject.toml', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [71]}}}, {'path': 'tests/runtime/conftest.py', 'status': 
'modified', 'Loc': {'(None, None, None)': {'add': [15], 'mod': [6, 11, 279]}, "(None, 'get_runtime_classes', 130)": {'add': [133]}, "(None, '_get_sandbox_folder', 41)": {'mod': [41, 42, 43, 44, 45]}, "(None, '_load_runtime', 208)": {'mod': [219, 272]}}}, {'path': 'tests/runtime/test_bash.py', 'status': 'modified', 'Loc': {"(None, 'test_cmd_run', 211)": {'add': [227], 'mod': [212, 214]}, "(None, 'test_git_operation', 444)": {'add': [466, 484], 'mod': [444, 448, 449, 456, 458, 459]}, '(None, None, None)': {'mod': [10, 17]}, "(None, 'test_bash_command_env', 33)": {'mod': [34]}, "(None, 'test_bash_server', 45)": {'mod': [46, 67, 76]}, "(None, 'test_multiline_commands', 91)": {'mod': [92]}, "(None, 'test_multiple_multiline_commands', 112)": {'mod': [126]}, "(None, 'test_complex_commands', 157)": {'mod': [160]}, "(None, 'test_no_ps2_in_output', 171)": {'mod': [173]}, "(None, 'test_multiline_command_loop', 184)": {'mod': [198]}, "(None, 'test_run_as_user_correct_home_dir', 248)": {'mod': [249, 253]}, "(None, 'test_multi_cmd_run_in_single_line', 261)": {'mod': [262, 266]}, "(None, 'test_stateful_cmd', 272)": {'mod': [273, 283]}, "(None, 'test_failed_cmd', 288)": {'mod': [289]}, "(None, 'test_copy_single_file', 303)": {'mod': [304, 306]}, "(None, 'test_copy_directory_recursively', 333)": {'mod': [334, 336]}, "(None, 'test_copy_to_non_existent_directory', 362)": {'mod': [363, 365]}, "(None, 'test_overwrite_existing_file', 378)": {'mod': [379, 381]}, "(None, 'test_copy_non_existent_file', 406)": {'mod': [407, 409]}, "(None, 'test_copy_from_directory', 422)": {'mod': [423, 424]}, "(None, 'test_python_version', 502)": {'mod': [503]}, "(None, 'test_pwd_property', 516)": {'mod': [517]}, "(None, 'test_basic_command', 530)": {'mod': [531]}, "(None, 'test_interactive_command', 558)": {'mod': [559]}, "(None, 'test_long_output', 594)": {'mod': [595]}, "(None, 'test_long_output_exceed_history_limit', 608)": {'mod': [609]}, "(None, 'test_long_output_from_nested_directories', 624)": 
{'mod': [625]}, "(None, 'test_command_backslash', 649)": {'mod': [650]}, "(None, 'test_command_output_continuation', 676)": {'mod': [677]}, "(None, 'test_long_running_command_follow_by_execute', 714)": {'mod': [717]}, "(None, 'test_empty_command_errors', 757)": {'mod': [758]}, "(None, 'test_python_interactive_input', 770)": {'mod': [771]}, "(None, 'test_python_interactive_input_without_set_input', 798)": {'mod': [801]}, "(None, 'test_stress_long_output_with_soft_and_hard_timeout', 837)": {'mod': [840]}, "(None, 'test_bash_remove_prefix', 927)": {'mod': [928]}}}, {'path': 'tests/runtime/test_browsergym_envs.py', 'status': 'modified', 'Loc': {"(None, 'test_browsergym_eval_env', 31)": {'mod': [32]}}}, {'path': 'tests/runtime/test_browsing.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [20]}, "(None, 'test_simple_browse', 23)": {'mod': [24, 27, 28, 29]}}}, {'path': 'tests/runtime/test_edit.py', 'status': 'modified', 'Loc': {"(None, 'test_edit_from_scratch', 30)": {'mod': [31]}, "(None, 'test_edit', 70)": {'mod': [71]}, "(None, 'test_edit_long_file', 129)": {'mod': [130]}}}, {'path': 'tests/runtime/test_env_vars.py', 'status': 'modified', 'Loc': {"(None, 'test_env_vars_os_environ', 16)": {'mod': [18]}, "(None, 'test_env_vars_runtime_operations', 35)": {'mod': [36]}, "(None, 'test_env_vars_added_by_config', 70)": {'mod': [71]}, "(None, 'test_docker_runtime_env_vars_persist_after_restart', 86)": {'mod': [89]}}}, {'path': 'tests/runtime/test_images.py', 'status': 'modified', 'Loc': {"(None, 'test_bash_python_version', 14)": {'mod': [21]}, "(None, 'test_nodejs_22_version', 48)": {'mod': [55]}, "(None, 'test_go_version', 69)": {'mod': [76]}}}, {'path': 'tests/runtime/test_ipython.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [1]}, "(None, 'test_simple_cmd_ipython_and_fileop', 32)": {'mod': [33]}, "(None, 'test_ipython_multi_user', 104)": {'mod': [105]}, "(None, 'test_ipython_simple', 176)": {'mod': [177]}, "(None, 
'test_ipython_package_install', 199)": {'mod': [201]}, "(None, 'test_ipython_file_editor_permissions_as_openhands', 234)": {'mod': [236]}, "(None, 'test_file_read_and_edit_via_oh_aci', 315)": {'mod': [316]}}}, {'path': 'tests/runtime/test_microagent.py', 'status': 'modified', 'Loc': {"(None, 'test_load_microagents_with_trailing_slashes', 75)": {'mod': [81]}, "(None, 'test_load_microagents_with_selected_repo', 115)": {'mod': [122]}, "(None, 'test_load_microagents_with_missing_files', 158)": {'mod': [177]}}}, {'path': 'tests/runtime/test_replay.py', 'status': 'modified', 'Loc': {"(None, 'test_simple_replay', 29)": {'mod': [34, 36]}, "(None, 'test_simple_gui_replay', 51)": {'mod': [62]}, "(None, 'test_replay_wrong_initial_state', 81)": {'mod': [90, 92]}, "(None, 'test_replay_basic_interactions', 115)": {'mod': [123]}}}, {'path': 'tests/runtime/test_stress_docker_runtime.py', 'status': 'modified', 'Loc': {"(None, 'test_stress_docker_runtime', 9)": {'mod': [10]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"openhands/runtime/__init__.py",
"openhands/runtime/plugins/jupyter/__init__.py",
"openhands/runtime/plugins/vscode/__init__.py",
"openhands/runtime/utils/bash.py",
"openhands/runtime/utils/runtime_init.py",
"tests/runtime/conftest.py",
"openhands/runtime/action_execution_server.py",
"openhands/runtime/utils/command.py"
],
"doc": [
"openhands/runtime/README.md"
],
"test": [
"tests/runtime/test_stress_docker_runtime.py",
"tests/runtime/test_browsing.py",
"tests/runtime/test_replay.py",
"tests/runtime/test_microagent.py",
"tests/runtime/test_bash.py",
"tests/runtime/test_browsergym_envs.py",
"tests/runtime/test_edit.py",
"tests/runtime/test_images.py",
"tests/runtime/test_ipython.py",
"tests/runtime/test_env_vars.py"
],
"config": [
"pyproject.toml",
".github/workflows/ghcr-build.yml",
"poetry.lock"
],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 0403b460f10207075b7472f5127bfdd4ab1a66f8 | https://github.com/All-Hands-AI/OpenHands/issues/272 | enhancement | Add latest tag for docker image | **What problem or use case are you trying to solve?**
Proposed [here](https://github.com/OpenDevin/OpenDevin/pull/263#issuecomment-2023918115). It would be better to add a `latest` tag for the image; then users would not need to pull the image at a specific version, and we would not need to keep changing the tags in [code](https://github.com/OpenDevin/OpenDevin/blob/a9102382f6a56765eea34fdac0f04ca0f2305651/opendevin/sandbox/sandbox.py#L17).
**Describe the UX of the solution you'd like**
**Do you have thoughts on the technical implementation?**
Need the pipeline builder to set it.
**Describe alternatives you've considered**
**Additional context**
| null | https://github.com/All-Hands-AI/OpenHands/pull/290 | null | {'base_commit': '2def49e79409108eacb4e797f7fdc2422cc5bd19', 'files': [{'path': 'README.md', 'status': 'modified', 'Loc': {'(None, None, 32)': {'mod': [32]}}}, {'path': 'evaluation/SWE-bench/scripts/run_docker_interactive.sh', 'status': 'modified', 'Loc': {'(None, None, 3)': {'mod': [3]}}}, {'path': 'opendevin/README.md', 'status': 'modified', 'Loc': {'(None, None, 30)': {'mod': [30]}}}, {'path': 'opendevin/sandbox/sandbox.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'mod': [21]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"opendevin/sandbox/sandbox.py"
],
"doc": [
"README.md",
"opendevin/README.md",
"evaluation/SWE-bench/scripts/run_docker_interactive.sh"
],
"test": [],
"config": [],
"asset": []
} | 1 |
All-Hands-AI | OpenHands | 221a4e83f1e438950591d183b0a6e7c5e15de6be | https://github.com/All-Hands-AI/OpenHands/issues/2308 | enhancement | [Feature]: Confirmation Mode for Agent | **What problem or use case are you trying to solve?**
Context: https://opendevin.slack.com/archives/C06P5NCGSFP/p1717733829670139
If the agent is NOT operating inside a sandbox, or if a user cares a lot about not letting the agent mess around with their environment, we should let the user confirm each action (command/code) before the agent executes it.
**Describe the UX of the solution you'd like**
Add a confirmation mode for OpenDevin: A checkbox on the frontend; when enabled (checked), the frontend will prompt for the user's approval for **every** action the agent wants to execute.
**Do you have thoughts on the technical implementation?**
When confirmation mode is on, we probably need to add a check in the agent controller for every "executable" action -- the action can only be sent off for execution when it receives user confirmation from the front end.
**Describe alternatives you've considered**
**Additional context**
| null | https://github.com/All-Hands-AI/OpenHands/pull/2774 | null | {'base_commit': '456690818c94a266935888f1e56e0afa2c4d5219', 'files': [{'path': 'frontend/package-lock.json', 'status': 'modified', 'Loc': {'(None, None, 20)': {'mod': [20]}, '(None, None, 8256)': {'mod': [8256, 8257, 8258]}}}, {'path': 'frontend/package.json', 'status': 'modified', 'Loc': {'(None, None, 19)': {'mod': [19]}}}, {'path': 'frontend/src/components/AgentControlBar.tsx', 'status': 'modified', 'Loc': {'(None, None, 19)': {'add': [19]}, '(None, None, 27)': {'add': [27]}, '(None, None, 29)': {'add': [29]}, '(None, None, 104)': {'mod': [104, 105]}, '(None, None, 107)': {'mod': [107, 108, 109, 110, 111, 112]}, '(None, None, 114)': {'mod': [114]}, '(None, None, 116)': {'mod': [116]}, '(None, None, 118)': {'mod': [118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139]}}}, {'path': 'frontend/src/components/AgentStatusBar.tsx', 'status': 'modified', 'Loc': {'(None, None, 60)': {'add': [60]}}}, {'path': 'frontend/src/components/chat/Chat.test.tsx', 'status': 'modified', 'Loc': {'(None, None, 3)': {'add': [3]}, '(None, None, 2)': {'mod': [2]}, '(None, None, 16)': {'mod': [16]}}}, {'path': 'frontend/src/components/chat/Chat.tsx', 'status': 'modified', 'Loc': {'(None, None, 2)': {'add': [2]}, '(None, None, 5)': {'add': [5]}, '(None, None, 8)': {'mod': [8]}, '(None, None, 12)': {'mod': [12]}}}, {'path': 'frontend/src/components/chat/ChatInterface.tsx', 'status': 'modified', 'Loc': {'(None, None, 126)': {'mod': [126]}, '(None, None, 172)': {'mod': [172]}}}, {'path': 'frontend/src/components/chat/ChatMessage.test.tsx', 'status': 'modified', 'Loc': {'(None, None, 32)': {'add': [32]}, '(None, None, 13)': {'mod': [13]}, '(None, None, 20)': {'mod': [20]}}}, {'path': 'frontend/src/components/chat/ChatMessage.tsx', 'status': 'modified', 'Loc': {'(None, None, 5)': {'add': [5]}, '(None, None, 8)': {'add': [8]}, '(None, None, 11)': {'add': [11]}, '(None, 
None, 60)': {'add': [60]}, '(None, None, 14)': {'mod': [14]}}}, {'path': 'frontend/src/components/modals/settings/SettingsForm.test.tsx', 'status': 'modified', 'Loc': {'(None, None, 11)': {'add': [11]}, '(None, None, 22)': {'add': [22]}, '(None, None, 30)': {'add': [30]}, '(None, None, 42)': {'add': [42]}, '(None, None, 47)': {'add': [47]}, '(None, None, 55)': {'add': [55]}, '(None, None, 74)': {'add': [74]}, '(None, None, 82)': {'add': [82]}, '(None, None, 87)': {'add': [87]}, '(None, None, 91)': {'add': [91]}}}, {'path': 'frontend/src/components/modals/settings/SettingsForm.tsx', 'status': 'modified', 'Loc': {'(None, None, 19)': {'add': [19]}, '(None, None, 30)': {'add': [30]}, '(None, None, 88)': {'add': [88]}, '(None, None, 1)': {'mod': [1]}}}, {'path': 'frontend/src/components/modals/settings/SettingsModal.test.tsx', 'status': 'modified', 'Loc': {'(None, None, 29)': {'add': [29]}, '(None, None, 35)': {'add': [35]}, '(None, None, 109)': {'add': [109]}, '(None, None, 199)': {'mod': [199]}}}, {'path': 'frontend/src/components/modals/settings/SettingsModal.tsx', 'status': 'modified', 'Loc': {'(None, None, 91)': {'add': [91]}, '(None, None, 172)': {'add': [172]}, '(None, None, 51)': {'mod': [51]}}}, {'path': 'frontend/src/i18n/translation.json', 'status': 'modified', 'Loc': {'(None, None, 569)': {'add': [569]}, '(None, None, 588)': {'add': [588]}, '(None, None, 683)': {'add': [683]}}}, {'path': 'frontend/src/services/actions.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [69], 'mod': [49, 55]}}}, {'path': 'frontend/src/services/session.test.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [22]}}}, {'path': 'frontend/src/services/settings.test.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [31, 48, 59], 'mod': [23]}}}, {'path': 'frontend/src/services/settings.ts', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [7, 9, 14, 53, 59, 93], 'mod': [72, 95, 96, 98]}}}, {'path': 
'frontend/src/types/AgentState.tsx', 'status': 'modified', 'Loc': {'(None, None, 10)': {'add': [10]}}}, {'path': 'opendevin/controller/agent_controller.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [18, 23]}, "('AgentController', None, 46)": {'add': [51]}, "('AgentController', '__init__', 57)": {'add': [62, 94]}, "('AgentController', 'on_event', 151)": {'add': [172, 174]}, "('AgentController', 'set_agent_state_to', 188)": {'add': [207]}, "('AgentController', '_step', 247)": {'add': [348, 351]}, "('AgentController', 'set_initial_state', 364)": {'mod': [365, 370]}}}, {'path': 'opendevin/controller/state/state.py', 'status': 'modified', 'Loc': {"('State', None, 38)": {'add': [41]}}}, {'path': 'opendevin/core/config.py', 'status': 'modified', 'Loc': {"('AppConfig', None, 178)": {'add': [225]}}}, {'path': 'opendevin/core/schema/agent.py', 'status': 'modified', 'Loc': {"('AgentState', None, 4)": {'add': [39]}}}, {'path': 'opendevin/core/schema/config.py', 'status': 'modified', 'Loc': {"('ConfigType', None, 4)": {'add': [22]}}}, {'path': 'opendevin/core/schema/observation.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [46]}}}, {'path': 'opendevin/events/action/__init__.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [34], 'mod': [1]}}}, {'path': 'opendevin/events/action/action.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [1, 6]}}}, {'path': 'opendevin/events/action/commands.py', 'status': 'modified', 'Loc': {"('CmdRunAction', None, 10)": {'add': [14]}, "('IPythonRunCellAction', None, 29)": {'add': [33]}, '(None, None, None)': {'mod': [6]}}}]} | [] | [] | [] | {
"iss_type": "4",
"iss_reason": "2",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"frontend/src/components/modals/settings/SettingsForm.tsx",
"frontend/package-lock.json",
"frontend/src/components/chat/Chat.test.tsx",
"opendevin/core/config.py",
"frontend/package.json",
"frontend/src/components/AgentControlBar.tsx",
"opendevin/controller/state/state.py",
"frontend/src/components/modals/settings/SettingsModal.test.tsx",
"opendevin/events/action/action.py",
"opendevin/events/action/commands.py",
"frontend/src/i18n/translation.json",
"frontend/src/components/chat/ChatMessage.test.tsx",
"frontend/src/components/chat/ChatMessage.tsx",
"frontend/src/services/settings.ts",
"frontend/src/services/settings.test.ts",
"frontend/src/components/chat/Chat.tsx",
"frontend/src/components/chat/ChatInterface.tsx",
"frontend/src/services/session.test.ts",
"frontend/src/components/AgentStatusBar.tsx",
"opendevin/events/action/__init__.py",
"frontend/src/components/modals/settings/SettingsModal.tsx",
"opendevin/core/schema/config.py",
"opendevin/core/schema/agent.py",
"frontend/src/types/AgentState.tsx",
"frontend/src/components/modals/settings/SettingsForm.test.tsx",
"frontend/src/services/actions.ts",
"opendevin/core/schema/observation.py",
"opendevin/controller/agent_controller.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
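The confirmation-mode behavior described in the row above — gating every "executable" action in the agent controller until the frontend reports the user's approval — can be sketched in a few lines. This is a minimal, hypothetical model (the class and field names `Controller`, `Action`, `confirmation_mode`, `AWAITING_USER_CONFIRMATION` are illustrative stand-ins, not OpenHands' actual API):

```python
from dataclasses import dataclass, field
from enum import Enum, auto
from typing import List, Optional


class AgentState(Enum):
    RUNNING = auto()
    AWAITING_USER_CONFIRMATION = auto()


@dataclass
class Action:
    command: str
    runnable: bool = True  # only runnable actions need confirmation


@dataclass
class Controller:
    confirmation_mode: bool = False
    state: AgentState = AgentState.RUNNING
    _pending: Optional[Action] = None
    executed: List[str] = field(default_factory=list)

    def step(self, action: Action) -> None:
        """Gate every runnable action behind user confirmation when enabled."""
        if self.confirmation_mode and action.runnable:
            self._pending = action
            self.state = AgentState.AWAITING_USER_CONFIRMATION
        else:
            self._execute(action)

    def confirm(self, approved: bool) -> None:
        """Called when the frontend reports the user's decision."""
        assert self._pending is not None, "no action is awaiting confirmation"
        if approved:
            self._execute(self._pending)
        self._pending = None
        self.state = AgentState.RUNNING

    def _execute(self, action: Action) -> None:
        self.executed.append(action.command)


ctl = Controller(confirmation_mode=True)
ctl.step(Action("rm -rf build/"))
print(ctl.state.name)  # AWAITING_USER_CONFIRMATION
ctl.confirm(approved=True)
print(ctl.executed)    # ['rm -rf build/']
```

The key design point matches the issue's proposal: the action is held by the controller, not executed, and only `confirm(True)` releases it; `confirm(False)` discards it and the agent resumes.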
scrapy | scrapy | c8f3d07e86dd41074971b5423fb932c2eda6db1e | https://github.com/scrapy/scrapy/issues/3370 | AttributeError from contract errback | When running a contract with a URL that returns non-200 response, I get the following:
```
2018-08-09 14:40:23 [scrapy.core.scraper] ERROR: Spider error processing <GET https://www.bureauxlocaux.com/annonce/a-louer-bureaux-a-louer-a-nantes--1289-358662> (referer: None)
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 653, in _runCallbacks
current.result = callback(current.result, *args, **kw)
File "/usr/local/lib/python3.6/site-packages/scrapy/contracts/__init__.py", line 89, in eb_wrapper
results.addError(case, exc_info)
File "/usr/local/lib/python3.6/unittest/runner.py", line 67, in addError
super(TextTestResult, self).addError(test, err)
File "/usr/local/lib/python3.6/unittest/result.py", line 17, in inner
return method(self, *args, **kw)
File "/usr/local/lib/python3.6/unittest/result.py", line 115, in addError
self.errors.append((test, self._exc_info_to_string(err, test)))
File "/usr/local/lib/python3.6/unittest/result.py", line 186, in _exc_info_to_string
exctype, value, tb, limit=length, capture_locals=self.tb_locals)
File "/usr/local/lib/python3.6/traceback.py", line 470, in __init__
exc_value.__cause__.__traceback__,
AttributeError: 'getset_descriptor' object has no attribute '__traceback__'
```
Here is what `exc_info` looks like:
```
(HttpError('Ignoring non-200 response',), <class 'scrapy.spidermiddlewares.httperror.HttpError'>, <traceback object at 0x7f4bdca1d948>)
```
| null | https://github.com/scrapy/scrapy/pull/3371 | null | {'base_commit': 'c8f3d07e86dd41074971b5423fb932c2eda6db1e', 'files': [{'path': 'scrapy/contracts/__init__.py', 'status': 'modified', 'Loc': {"('ContractsManager', 'eb_wrapper', 85)": {'mod': [87]}}}, {'path': 'tests/test_contracts.py', 'status': 'modified', 'Loc': {'(None, None, None)': {'add': [2, 4]}, "('ContractsManagerTest', 'test_scrapes', 163)": {'add': [187]}}}]} | [] | [] | [] | {
"iss_type": "1",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/contracts/__init__.py"
],
"doc": [],
"test": [
"tests/test_contracts.py"
],
"config": [],
"asset": []
} | 1 | |
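The `exc_info` tuple printed at the end of the issue above appears to be in `(value, type, traceback)` order, while `unittest`'s result machinery expects the `(type, value, traceback)` ordering that `sys.exc_info()` returns — which is presumably why `traceback` ends up looking for `__traceback__` on the wrong object. A minimal stdlib sketch of the correct ordering (this illustrates the general mechanism, not scrapy's actual `eb_wrapper` fix):

```python
import sys
import traceback


def capture_exc_info():
    """Return exc_info the way unittest expects it: (type, value, traceback)."""
    try:
        raise ValueError("non-200 response")
    except ValueError:
        return sys.exc_info()


exc_type, exc_value, tb = capture_exc_info()
print(exc_type is ValueError)             # True  -- first element is the class
print(isinstance(exc_value, ValueError))  # True  -- second is the instance

# With the correct ordering, the standard formatter works cleanly:
formatted = traceback.format_exception(exc_type, exc_value, tb)
print("ValueError: non-200 response" in formatted[-1])  # True

# A swapped tuple such as (value, type, tb) -- like the one shown in the
# report -- makes formatters treat the class object as the exception value
# and look up attributes such as __traceback__ on it, producing errors of
# the kind quoted in the traceback above.
```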
scrapy | scrapy | b337c986ca1188f4b26d30c9ae4bb7ff457ed505 | https://github.com/scrapy/scrapy/issues/5811 | bug
good first issue | `BaseSettings.setdefault` does nothing | ### Description
Calling the `setdefault` method of the `BaseSettings` class does nothing.
### Steps to Reproduce
```python
from scrapy.settings import BaseSettings
settings = BaseSettings()
stored = settings.setdefault('key', 'value')
print(stored) # prints None
print(settings.copy_to_dict()) # prints empty dictionary
```
**Expected behavior:**
`settings.setdefault(key, default)` must work as described by the `MutableMapping` interface: if `key` is not present, set `settings[key]` to `default` and return `default`; otherwise return `settings[key]`.
**Actual behavior:**
`settings.setdefault(key, default)` does nothing, regardless of whether `key` is already present.
**Reproduces how often:** 100%
### Versions
Scrapy : 2.7.1
lxml : 4.8.0.0
libxml2 : 2.9.12
cssselect : 1.1.0
parsel : 1.6.0
w3lib : 1.22.0
Twisted : 22.4.0
Python : 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
pyOpenSSL : 22.0.0 (OpenSSL 3.0.3 3 May 2022)
cryptography : 37.0.2
Platform : Windows-10-10.0.19044-SP0
### Additional context
`BaseSettings` explicitly inherits from `MutableMapping` and does not redefine the `setdefault` method. Thus, it uses the base implementation:
```python
def setdefault(self, key, default=None):
'D.setdefault(k[,d]) -> D.get(k,d), also set D[k]=d if k not in D'
try:
return self[key]
except KeyError:
self[key] = default
return default
```
The base implementation refers to `self[key]`, which is in fact `self.__getitem__(key)`. `BaseSettings` has its own `__getitem__` implementation:
```python
def __getitem__(self, opt_name):
if opt_name not in self:
return None
return self.attributes[opt_name].value
```
And here is the root of the problem: when the passed `key` is not present, `__getitem__` returns `None` instead of raising `KeyError`, so the inherited `setdefault` returns that `None` without ever storing the default.
**Solution**
Implement own `setdefault` method. An example with matching signature:
```python
def setdefault(self, opt_name, default=None):
if opt_name not in self:
self.set(opt_name, default)
return default
return self.attributes[opt_name].value
```
A `priority='project'` argument could be added, although this changes the signature.
Another way is to inherit from `Mapping` instead of `MutableMapping` if this method and the other base methods are redundant.
**Current workaround**
Convert `BaseSettings` object to a dictionary and only then use `setdefault`. | null | https://github.com/scrapy/scrapy/pull/5821 | null | {'base_commit': 'b337c986ca1188f4b26d30c9ae4bb7ff457ed505', 'files': [{'path': 'scrapy/settings/__init__.py', 'status': 'modified', 'Loc': {"('BaseSettings', None, 56)": {'add': [295]}}}, {'path': 'tests/test_settings/__init__.py', 'status': 'modified', 'Loc': {"('BaseSettingsTest', None, 64)": {'add': [67]}}}]} | [] | [] | [] | {
"iss_type": "2",
"iss_reason": "1",
"loc_way": "pr",
"loc_scope": null,
"info_type": null
} | {
"code": [
"scrapy/settings/__init__.py",
"tests/test_settings/__init__.py"
],
"doc": [],
"test": [],
"config": [],
"asset": []
} | 1 |
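The root cause described in the issue above — a `MutableMapping` subclass whose `__getitem__` returns `None` for missing keys, silently defeating the inherited `setdefault` — can be reproduced without scrapy using a minimal stand-in class (this mimics the inheritance pattern the issue quotes; it is not scrapy's actual `BaseSettings`):

```python
from collections.abc import MutableMapping


class LenientSettings(MutableMapping):
    """Mimics BaseSettings: __getitem__ returns None instead of raising KeyError."""

    def __init__(self):
        self._data = {}

    def __getitem__(self, key):
        if key not in self._data:
            return None  # never raises KeyError, so inherited setdefault misfires
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value

    def __delitem__(self, key):
        del self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)

    # Fix along the lines proposed in the issue: override setdefault explicitly.
    def setdefault_fixed(self, key, default=None):
        if key not in self._data:
            self[key] = default
            return default
        return self._data[key]


s = LenientSettings()
print(s.setdefault("key", "value"))        # None -- inherited version stores nothing
print(dict(s))                             # {}
print(s.setdefault_fixed("key", "value"))  # value
print(dict(s))                             # {'key': 'value'}
```

The inherited `MutableMapping.setdefault` does `try: return self[key]` and only stores the default on `KeyError`; since `__getitem__` here never raises, it returns `None` and stores nothing — exactly the behavior reported.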