| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
tflearn/tflearn | data-science | 943 | cross_product term | Anyone know how to use cross_product in tflearn?
Or how to transform a TensorFlow indicator column into a tensor for tflearn?
```
tf.feature_column.indicator_column(tf.feature_column.categorical_column_with_vocabulary_list(cc,cc_size[1]))
``` | open | 2017-10-26T16:52:06Z | 2017-10-26T16:52:06Z | https://github.com/tflearn/tflearn/issues/943 | [] | whizzalan | 0 |
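Not an answer from the maintainers, but conceptually a crossed (cross-product) column just hashes the joined categorical values into a fixed number of buckets. A plain-Python sketch of that idea (the helper name and bucket size are made up for illustration):

```python
import hashlib

def crossed_bucket(values, hash_bucket_size):
    """Map a tuple of categorical values to one of `hash_bucket_size` buckets.

    This mimics the idea behind tf.feature_column.crossed_column: the
    categories are joined and hashed, so every unique combination lands
    in a deterministic bucket.
    """
    key = "_X_".join(str(v) for v in values)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % hash_bucket_size

# Every (country, language) pair gets a stable bucket id:
bucket = crossed_bucket(("US", "en"), hash_bucket_size=1000)
```

Using a real hash (rather than Python's per-process `hash()`) keeps the bucket ids stable across runs.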
whitphx/streamlit-webrtc | streamlit | 1,046 | ReferenceError: weakly-referenced object no longer exists | Updated to 0.43.3 but I still get the same error on Mac M1 Pro | closed | 2022-09-01T12:31:35Z | 2022-09-05T12:03:18Z | https://github.com/whitphx/streamlit-webrtc/issues/1046 | [] | creeksflowing | 12 |
home-assistant/core | asyncio | 140,995 | HomeKit Integration: Missing and Non-Functional Entities After Update (Smartmi Air Purifier P2) | ### The problem
Hello! First of all, thank you for your hard work on the HomeKit integration. I need your help with an issue that appeared after updating Home Assistant.
After the update, several entities in the HomeKit integration disappeared, and new ones appeared, but they are not working. Here are the details:
**Device:**
Name: Smartmi Air Purifier P2
Model: ZMKQJHQP21
Manufacturer: Beijing Smartmi Electronic Technology Co., Ltd.
**Previously Working Entities:**
sensor.smartmi_air_purifier_p2_air_purifier_status
fan.smartmi_air_purifier_p2
sensor.smartmi_air_purifier_p2_pm2_5_density
sensor.smartmi_air_purifier_p2_pm10_density
Additionally, there were entities for operation modes and filter status.
**Current Situation:**
After the update, the following entities are present but non-functional:
fan.smartmi_air_purifier_p2 - Unavailable
sensor.smartmi_air_purifier_p2_air_purifier_status - Unavailable
sensor.smartmi_air_purifier_p2_air_quality - Shows values from 1 to 5, but other entities are missing or not working.
### What version of Home Assistant Core has the issue?
2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant Supervised
### Integration causing the issue
homekit_controller
### Link to integration documentation on our website
_No response_
### Diagnostics information
{
"data": {
"config-entry": {
"title": "Smartmi Air Purifier P2",
"version": 1,
"data": {
"AccessoryIP": "**REDACTED**",
"AccessoryIPs": [
"192.168.33.61"
],
"AccessoryLTPK": "xxxxxxxxx713709b0d957456bf18b61cbd418b027fb66de6",
"AccessoryPairingID": "xx:2x:xx:5x:3x:xx",
"AccessoryPort": 80,
"Connection": "IP",
"iOSDeviceLTPK": "xxxxxxe0f48ea6b6a4e3de32073a7d",
"iOSDeviceLTSK": "**REDACTED**",
"iOSPairingId": "xxxxxxx48e6-ba29-5f4be17810bd"
}
},
"entity-map": [
{
"aid": 1,
"services": [
{
"iid": 1,
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"characteristics": [
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 2,
"perms": [
"pw"
],
"format": "bool",
"description": "Identify"
},
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 3,
"perms": [
"pr"
],
"format": "string",
"value": "Beijing Smartmi Electronic Technology Co., Ltd.",
"description": "Manufacturer",
"maxLen": 64
},
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 4,
"perms": [
"pr"
],
"format": "string",
"value": "ZMKQJHQP21",
"description": "Model",
"maxLen": 64
},
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 5,
"perms": [
"pr"
],
"format": "string",
"value": "Smartmi Air Purifier P2",
"description": "Name",
"maxLen": 64
},
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 6,
"perms": [
"pr"
],
"format": "string",
"value": "**REDACTED**",
"description": "Serial Number",
"maxLen": 64
},
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 7,
"perms": [
"pr"
],
"format": "string",
"value": "3.0.2",
"description": "Firmware Revision",
"maxLen": 64
}
]
},
{
"iid": 15,
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"characteristics": [
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 16,
"perms": [
"pr",
"ev"
],
"format": "uint8",
"value": 1,
"description": "Air Quality",
"minValue": 0,
"maxValue": 5,
"minStep": 1
},
{
"type": "0000xxx-xxxx-xxxx-xxxxx-xxxxxxxxx",
"iid": 17,
"perms": [
"pr"
],
"format": "string",
"value": "My air quality sensor",
"description": "Name",
"maxLen": 64
}
]
}
]
}
],
"device": {
"name": "Smartmi Air Purifier P2",
"model": "ZMKQJHQP21",
"manfacturer": "Beijing Smartmi Electronic Technology Co., Ltd.",
"sw_version": "3.0.2",
"entities": [
{
"original_name": "Smartmi Air Purifier P2 Air Purifier Status",
"original_device_class": "enum",
"entity_category": "diagnostic",
"state": {
"entity_id": "sensor.smartmi_air_purifier_p2_air_purifier_status",
"state": "unavailable",
"attributes": {
"restored": true,
"options": [
"inactive",
"idle",
"purifying"
],
"device_class": "enum",
"friendly_name": "Smartmi Air Purifier P2 Air Purifier Status",
"supported_features": 0
}
}
},
{
"original_name": "Smartmi Air Purifier P2 Air Quality",
"original_device_class": "aqi",
"state": {
"entity_id": "sensor.smartmi_air_purifier_p2_air_quality",
"state": "1",
"attributes": {
"state_class": "measurement",
"device_class": "aqi",
"friendly_name": "Smartmi Air Purifier P2 Air Quality"
}
}
},
{
"original_name": "Smartmi Air Purifier P2 Identify",
"original_device_class": "identify",
"entity_category": "diagnostic",
"state": {
"entity_id": "button.smartmi_air_purifier_p2_identify",
"state": "2025-03-17T07:35:22.947612+00:00",
"attributes": {
"device_class": "identify",
"friendly_name": "Smartmi Air Purifier P2 Identify"
}
}
},
{
"original_name": "Smartmi Air Purifier P2 My air purifier",
"original_device_class": null,
"state": {
"entity_id": "fan.smartmi_air_purifier_p2",
"state": "unavailable",
"attributes": {
"restored": true,
"friendly_name": "purifier",
"supported_features": 48
}
}
}
]
}
}
}
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-20T14:01:02Z | 2025-03-20T17:27:04Z | https://github.com/home-assistant/core/issues/140995 | [
"integration: homekit"
] | FleshZLO | 1 |
tqdm/tqdm | jupyter | 1,438 | bar_format typeerror for {rate:.3f} format | ## my code
```py
from tqdm.auto import tqdm , trange
# from tqdm.notebook import tqdm
from time import sleep
with tqdm(total=10,
desc="desc", bar_format="[{desc}: {percentage:3.0f}%] |{bar}| [{n_fmt}/{total_fmt}] [{elapsed}<<{remaining}] [{rate:.3f} {unit}/s] "
) as t:
for i in range(10):
sleep(0.1)
t.update()
```
## env
environment, where applicable:
```python
import tqdm, sys
print(tqdm.__version__, sys.version, sys.platform)
```
4.64.0 3.9.9 (tags/v3.9.9:ccb0e6a, Nov 15 2021, 18:08:50) [MSC v.1929 64 bit (AMD64)] win32
## traceback
```
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "D:\programs\python\python39\lib\site-packages\tqdm\std.py", line 1109, in __init__
    self.refresh(lock_args=self.lock_args)
  File "D:\programs\python\python39\lib\site-packages\tqdm\std.py", line 1361, in refresh
    self.display()
  File "D:\programs\python\python39\lib\site-packages\tqdm\std.py", line 1509, in display
    self.sp(self.__str__() if msg is None else msg)
  File "D:\programs\python\python39\lib\site-packages\tqdm\std.py", line 1165, in __str__
    return self.format_meter(**self.format_dict)
  File "D:\programs\python\python39\lib\site-packages\tqdm\std.py", line 524, in format_meter
    nobar = bar_format.format(bar=full_bar, **format_dict)
TypeError: unsupported format string passed to NoneType.__format__
```
## other
Just using `{rate}` without a format spec works fine.
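The root cause can be reproduced with plain `str.format`: before the first update, tqdm's `rate` is still `None`, and `None` rejects a float format spec like `.3f` (this is my own reduction, not tqdm code):

```python
# Reduction of the tqdm bar_format failure: `rate` is None before the
# first update, and None.__format__ rejects a float spec like ".3f".
fmt = "[{rate:.3f} it/s]"

try:
    fmt.format(rate=None)
    raised = False
except TypeError:
    raised = True

print("TypeError raised:", raised)

# A None-safe alternative is the preformatted string field, which tqdm
# fills in for you (here simulated with a placeholder value):
safe = "[{rate_fmt}]".format(rate_fmt="?it/s")
```

So any `bar_format` that applies a numeric spec directly to `{rate}` will crash at construction time; `{rate_fmt}` avoids the problem.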
- [x] I have marked all applicable categories:
  + [x] exception-raising bug
  + [ ] visual output bug
- [x] I have visited the [source website], and in particular
  read the [known issues]
- [x] I have searched through the [issue tracker] for duplicates
- [x] I have mentioned version numbers, operating system and
  environment, where applicable
[source website]: https://github.com/tqdm/tqdm/
[known issues]: https://github.com/tqdm/tqdm/#faq-and-known-issues
[issue tracker]: https://github.com/tqdm/tqdm/issues?q=
| open | 2023-03-02T13:51:56Z | 2025-02-09T21:54:41Z | https://github.com/tqdm/tqdm/issues/1438 | [] | ZX1209 | 1 |
wkentaro/labelme | computer-vision | 318 | libpng warning: iCCP: known incorrect sRGB profile | Hi, what does this warning mean? It doesn't affect my use, though.

| closed | 2019-02-14T07:43:52Z | 2019-04-27T01:59:00Z | https://github.com/wkentaro/labelme/issues/318 | [] | lck1201 | 0 |
darrenburns/posting | rest-api | 160 | Scripting get_variable method not implemented | The docs [Scripting](https://posting.sh/guide/scripting/) contain a reference to the `posting.get_variable()` method, but it doesn't look like this has been implemented (yet).
As a workaround, looks like we could use `vars = posting.variables` and process the dict returned.
e.g.
```python
from posting import Posting
import json
def setup(posting: Posting) -> None:
    auth_token = "auth_token"

    # Capture the variables currently set
    vars = posting.variables

    # Check to see if the auth token is set
    if auth_token not in vars:
        print(f"{auth_token} not found")
        posting.set_variable(auth_token, "1234567890")

    # Debug - dump the updated variables
    vars = posting.variables
    print(f"Vars: {json.dumps(vars, sort_keys=True, indent=4)}")

    # Debug to see the var is set.
    print(f"Auth: {vars[auth_token]}")
```
| closed | 2024-12-31T13:58:53Z | 2025-03-02T18:07:40Z | https://github.com/darrenburns/posting/issues/160 | [
"bug"
] | zDavidB | 2 |
axnsan12/drf-yasg | django | 691 | No utf-8 symbols support for generated JSON and YAML | When generating files via ^swagger(?P<format>\.json|\.yaml)$, UTF-8 symbols are not supported, even though the defined description may contain such symbols. | open | 2021-01-14T12:25:29Z | 2025-03-07T12:13:27Z | https://github.com/axnsan12/drf-yasg/issues/691 | [
"triage"
] | ProstoMaxim | 1 |
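For context, this is usually the classic `json.dumps` behaviour rather than missing support: by default Python escapes non-ASCII characters to `\uXXXX`, which readers often interpret as "no UTF-8 support". A generic illustration (not drf-yasg code):

```python
import json

doc = {"description": "Привет, мир: café"}

escaped = json.dumps(doc)                       # default: non-ASCII escaped as \uXXXX
readable = json.dumps(doc, ensure_ascii=False)  # keeps UTF-8 characters as-is

print(escaped)
print(readable)
```

Both forms decode back to the same data; `ensure_ascii=False` only changes how the serialized text looks.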
voila-dashboards/voila | jupyter | 1,341 | Persistent loop of Matplotlib figure animation results in flickering output when run in a thread | ## Description
The [Mesa](https://github.com/projectmesa/mesa) agent-based modeling library is looking to replace its self-hosted Tornado-based visualization server with Voilà. I have made a prototype in https://github.com/rht/mesa-examples/tree/voila. However, I encountered the flickering issue reported in #431. To summarize, it is like running the Game of Life simulation with play and stop buttons. The play and stop work, except that the display flickers.
The long-running loop: https://github.com/rht/mesa-examples/blob/99a68386226f3fe5be3953d25a84bd92f8b7065c/examples/boltzmann_wealth_model/run_voila.py#L161-L170. If I run the loop without threading or multiprocessing, it runs just fine. Each loop lasts for about 300 ms. My hypothesis is that the solutions in #431 do not apply because there is only 1 plot being constantly re-rendered, whereas in my case, I have 3 objects being constantly re-rendered:
- the time series plot of the simulation
- the imshow heatmap view of the agents
- the elapsed of each step, displayed in a `widgets.Output`
https://github.com/rht/mesa-examples/blob/99a68386226f3fe5be3953d25a84bd92f8b7065c/examples/boltzmann_wealth_model/run_voila.py#L154-L159
I have tried the `plt.draw()` as recommended in https://github.com/voila-dashboards/voila/issues/431#issuecomment-542390982, but it didn't work out. I have also tried adding `clear_output(wait=True)`, and it didn't work out.
I haven't tried JupyterLab yet and have been focusing on making it work with `voila --no-browser --debug`. I apologize in advance if this issue is not concise or self-contained.
## Context
<!--Complete the following for context, and add any other relevant context-->
- voila version 0.4.0
- Operating System and version: NixOS 23.05
- Browser and version: Brave v1.52.126 | open | 2023-06-26T10:06:20Z | 2024-02-12T23:25:17Z | https://github.com/voila-dashboards/voila/issues/1341 | [
"bug"
] | rht | 6 |
microsoft/nlp-recipes | nlp | 625 | [ASK] transformers.abstractive_summarization_bertsum.py not importing transformers | ### Description
I ran the following code in Google Colab
```
!pip install --upgrade
!pip install -q git+https://github.com/microsoft/nlp-recipes.git
!pip install jsonlines
!pip install pyrouge
!pip install scrapbook
import os
import shutil
import sys
from tempfile import TemporaryDirectory
import torch
import nltk
from nltk import tokenize
import pandas as pd
import pprint
import scrapbook as sb
nlp_path = os.path.abspath("../../")
if nlp_path not in sys.path:
    sys.path.insert(0, nlp_path)
from utils_nlp import models
from utils_nlp.models import transformers
from utils_nlp.models.transformers.abstractive_summarization_bertsum \
import BertSumAbs, BertSumAbsProcessor
```
It breaks on the last line and I get the following error
```
/usr/local/lib/python3.7/dist-packages/utils_nlp/models/transformers/abstractive_summarization_bertsum.py in <module>()
15 from torch.utils.data.distributed import DistributedSampler
16 from tqdm import tqdm
---> 17 from transformers import AutoTokenizer, BertModel
18
19 from utils_nlp.common.pytorch_utils import (
ModuleNotFoundError: No module named 'transformers'
```
In summary, the code in abstractive_summarization_bertsum.py fails to resolve `transformers`, even though the file is located in the transformers folder. Is this something to be fixed on your side? | open | 2022-01-11T10:21:31Z | 2022-02-17T23:23:40Z | https://github.com/microsoft/nlp-recipes/issues/625 | [] | neqkir | 1 |
Gerapy/Gerapy | django | 77 | Uploaded project's .py files cannot be edited | The project's .py files cannot contain Chinese characters. After I replaced all Chinese with English, the files could be edited. Please fix this. | open | 2018-08-01T07:14:42Z | 2020-07-24T08:19:53Z | https://github.com/Gerapy/Gerapy/issues/77 | [] | ghost | 1 |
jina-ai/serve | deep-learning | 5,402 | Bind to `host` instead of `default_host` | **Describe the bug**
Flow accepts a `host` parameter because it inherits from Client and Gateway, but this is confusing, as shown in #5401 | closed | 2022-11-17T08:59:04Z | 2022-11-21T15:43:42Z | https://github.com/jina-ai/serve/issues/5402 | [
"area/community"
] | JoanFM | 5 |
psf/requests | python | 6,080 | How to uninstall requests library using setup.py? | We are in a factory environment where we cannot use pip.
We installed the requests library using `python setup.py install`.
Is it possible to uninstall the requests library using setup.py? Please share the command. | closed | 2022-03-07T10:57:39Z | 2023-03-08T00:03:28Z | https://github.com/psf/requests/issues/6080 | [] | ashokchandran | 1 |
Tanuki/tanuki.py | pydantic | 35 | Align statements do not support lists or tuples | The following align statements error out with List or tuple inputs:
```
@Monkey.patch
def classify_sentiment(input: List[str]) -> Literal['Good', 'Bad', 'Neutral']:  # Multi-class classification
    """
    Determine if the input is positive, negative or neutral sentiment
    """

@Monkey.align
def align():
    assert classify_sentiment(["I thought the ending was awesome"]) == 'Good'
    assert classify_sentiment(["The acting was horrendous"]) == 'Bad'
    assert classify_sentiment(["It was a dark and stormy night"]) == 'Neutral'
```
```
@Monkey.patch
def classify_sentiment(input: tuple) -> Literal['Good', 'Bad', 'Neutral']:  # Multi-class classification
    """
    Determine if the input is positive, negative or neutral sentiment
    """

@Monkey.align
def align():
    assert classify_sentiment(("I thought the ending was awesome", "It was really good")) == 'Good'
``` | closed | 2023-11-03T18:32:28Z | 2023-11-08T14:56:49Z | https://github.com/Tanuki/tanuki.py/issues/35 | [] | MartBakler | 0 |
tflearn/tflearn | data-science | 1,074 | tflearn doesn't work when using higher TensorFlow | I found that tflearn doesn't work with newer TensorFlow versions.
Is it a bug?
My Tensorflow version: 1.8.0
TfLearn version: 0.3.2
OS: Win10 x64
Python version: 3.6.6 | open | 2018-07-16T06:55:25Z | 2018-07-16T06:55:25Z | https://github.com/tflearn/tflearn/issues/1074 | [] | polar99 | 0 |
koxudaxi/fastapi-code-generator | fastapi | 328 | [] in query parameter name generates code that cannot be formatted | When you have `[]` in a parameter name, it fails to generate correct code. See this part of the OpenAPI schema:
```yaml
parameters:
- in: query
name: color[]
schema:
type: array
items:
type: string
```
Possible solutions:
1. Correct schema to not use `[]`
2. Patch template to strip `[]` but handle possible name collisions like `color` and `color[]`
3. Add some `--no-code-format` flag to allow generate invalid code to fix it later manually
What do you think about it?
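A sketch of option 2, stripping the brackets but disambiguating collisions by suffixing a counter (the helper name and strategy are my own, not the generator's):

```python
import re

def sanitize_param_names(names):
    """Strip characters that are invalid in Python identifiers (like [])
    and resolve collisions such as `color` vs `color[]` by appending a counter."""
    seen = {}
    result = []
    for name in names:
        # Drop everything that is not a word character; fall back to a
        # placeholder if nothing survives (e.g. a parameter literally named "[]").
        base = re.sub(r"\W", "", name) or "param"
        count = seen.get(base, 0)
        seen[base] = count + 1
        result.append(base if count == 0 else f"{base}_{count + 1}")
    return result
```

The generator would still need to keep a mapping back to the original wire name (`color[]`) so the request is serialized correctly, e.g. via an alias.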
| open | 2023-03-01T23:23:32Z | 2023-03-01T23:23:32Z | https://github.com/koxudaxi/fastapi-code-generator/issues/328 | [] | Skyross | 0 |
chainer/chainer | numpy | 8,195 | ChainerX take allows OOB access | I think we should not allow this?
```
>>> x = chx.array([1,2])
>>> y = x.take(chx.array([3]), axis=0)
>>> y
array([2], shape=(1,), dtype=int64, device='native:0')
```
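A plain-Python sketch of the bounds check ChainerX could apply, validating each index against the axis length before gathering (the helper name is mine):

```python
def checked_take(values, indices):
    """Gather `values[i]` for each i, rejecting out-of-bounds indices
    (including negative ones past -len) instead of wrapping or clipping."""
    n = len(values)
    out = []
    for i in indices:
        if not -n <= i < n:
            raise IndexError(
                f"index {i} is out of bounds for axis 0 with size {n}")
        out.append(values[i])
    return out
```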
numpy raises an exception for this. | closed | 2019-09-29T04:36:42Z | 2019-10-10T03:52:57Z | https://github.com/chainer/chainer/issues/8195 | [
"pr-ongoing",
"ChainerX"
] | shinh | 1 |
twelvedata/twelvedata-python | matplotlib | 7 | [Bug] Technical Indicator Plotly | Hello,
I ran the 'static' (chart) example code from git without error using python 3.5.
The chart appeared in my browser.
However, the Technical Indicators are not appearing on the chart. (Compared to the image of the char on the git and pypi pages)
It seems that either there is a bug, or I need to write additional code (presumably using plotly modules) to configure the chart to properly display the technical indicators?
I'm on Ubuntu 16.04. Using Pycharm.
I had to use ts.show_plotly() to display the chart, otherwise the chart doesn't show.
I also tried on Python3.6 and 3.7 and neither seemed to make a difference.
Thanks for your help.
Matt | closed | 2020-04-26T05:54:30Z | 2020-04-26T16:28:56Z | https://github.com/twelvedata/twelvedata-python/issues/7 | [] | majikbyte | 2 |
jina-ai/clip-as-service | pytorch | 430 | Adding Bi-LSTM layer to word-level embeddings | Has anyone got any examples of them adding a classification layer (as per the Bert paper) for NER? | open | 2019-08-02T18:16:42Z | 2019-08-02T18:16:42Z | https://github.com/jina-ai/clip-as-service/issues/430 | [] | samjtozer | 0 |
kennethreitz/responder | flask | 45 | Probable bug in GraphQL JSON request handling | There seems to be a bug in parsing of JSON GraphQL requests.
Test case (disregard the fact that it would also fail if parsed successfully):
```python
def test_graphql_schema_json_query(api, schema):
    api.add_route("/", schema)
    r = api.session().post("http://;/", headers={"Accept": "json", "Content-type": "json"}, data={'query': '{ hello }'})
    assert r.ok
```
Result on the latest `master`:
```
tests/test_responder.py:123:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
.env/lib/python3.7/site-packages/requests/sessions.py:559: in post
return self.request('POST', url, data=data, json=json, **kwargs)
.env/lib/python3.7/site-packages/starlette/testclient.py:312: in request
json=json,
.env/lib/python3.7/site-packages/requests/sessions.py:512: in request
resp = self.send(prep, **send_kwargs)
.env/lib/python3.7/site-packages/requests/sessions.py:622: in send
r = adapter.send(request, **kwargs)
.env/lib/python3.7/site-packages/starlette/testclient.py:159: in send
raise exc from None
.env/lib/python3.7/site-packages/starlette/testclient.py:156: in send
loop.run_until_complete(connection(receive, send))
/usr/local/Cellar/python/3.7.0/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/base_events.py:568: in run_until_complete
return future.result()
responder/api.py:71: in asgi
resp = await self._dispatch_request(req)
responder/api.py:112: in _dispatch_request
self.graphql_response(req, resp, schema=view)
responder/api.py:200: in graphql_response
query = self._resolve_graphql_query(req)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
req = <responder.models.Request object at 0x105f20c18>
@staticmethod
def _resolve_graphql_query(req):
if "json" in req.mimetype:
> return req.json()["query"]
E AttributeError: 'Request' object has no attribute 'json'
```
I tried fixing it myself, but got tangled in the web of `async/await`-s and ended up breaking other stuff 😅
I will try again later unless someone else wants to pick it up. | closed | 2018-10-14T19:14:04Z | 2018-10-15T07:32:21Z | https://github.com/kennethreitz/responder/issues/45 | [] | artemgordinskiy | 2 |
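I didn't get a patch working either, but the shape of the fix seems to be awaiting the request body instead of calling a non-existent synchronous `req.json()`. A toy reduction with a stub request (everything here is a stand-in, not responder's real API):

```python
import asyncio
import json

class StubRequest:
    """Minimal stand-in for the framework's Request: the body is only
    reachable through an async accessor, so sync code like
    `req.json()["query"]` cannot work."""
    def __init__(self, body, mimetype="application/json"):
        self._body = body
        self.mimetype = mimetype

    async def media(self):
        # Hypothetical async body accessor; responder's real name may differ.
        return json.loads(self._body)

async def resolve_graphql_query(req):
    if "json" in req.mimetype:
        data = await req.media()  # must be awaited, hence an async resolver
        return data["query"]
    raise ValueError("unsupported mimetype")

query = asyncio.run(resolve_graphql_query(StubRequest('{"query": "{ hello }"}')))
```

The tangle of `async/await` mentioned above is exactly this: every caller up the chain of `_resolve_graphql_query` has to become async too.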
twopirllc/pandas-ta | pandas | 381 | VWAP that matched values of TradingView VWAP indicator (anchor) | Running
python: 3.8.5
pandas_ta version: 0.3.14b0
Description:
Trying to get a VWAP that matches the values of the TradingView indicator at this link.
https://www.tradingview.com/support/solutions/43000502018-volume-weighted-average-price-vwap/
I think the key to matching this TradingView indicator is its "Anchor Setting", default is "session".
I want pandas_ta VWAP that matches TradingView indicator 30 minute chart.
The "Timeseries Offset Aliases" documentation shows: T,min = minutely frequency
I am not sure what combination of pandas_ta "anchor" and "offset" settting can be used to match TradingView VWAP on 30 minute chart.
Also, I am not sure if pandas_ta can match this TradingView indicator.
Code I have tried:
```python
df.set_index(pd.DatetimeIndex(df["date"]), inplace=True)
vwap = df.ta.vwap(anchor="T", offset=30, append=True)
```
I can provide screenshots if necessary.
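To sanity-check whatever anchor/offset combination you end up with: the session-anchored definition TradingView uses is just a cumulative price*volume sum divided by cumulative volume, reset at each session boundary. A dependency-free reference implementation (my own, not pandas-ta's):

```python
def session_vwap(sessions):
    """Compute anchored VWAP per bar.

    `sessions` is a list of sessions; each session is a list of
    (typical_price, volume) bars. The cumulative sums restart at every
    session boundary, which is what TradingView's "Session" anchor does.
    """
    result = []
    for bars in sessions:
        cum_pv = 0.0
        cum_vol = 0.0
        out = []
        for price, volume in bars:
            cum_pv += price * volume
            cum_vol += volume
            out.append(cum_pv / cum_vol)
        result.append(out)
    return result

# Two sessions: note the second session's VWAP ignores the first session.
vwaps = session_vwap([[(10, 100), (12, 100)], [(20, 50)]])
```

Comparing pandas-ta's output on a 30-minute frame against this per-session cumulative calculation should show immediately whether the anchor is resetting where TradingView resets it.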
| closed | 2021-08-31T05:37:20Z | 2021-09-07T01:07:04Z | https://github.com/twopirllc/pandas-ta/issues/381 | [
"info"
] | slhawk98 | 12 |
dbfixtures/pytest-postgresql | pytest | 574 | Maintain v3.x line with psycopg2 support | ### What action do you want to perform
Since `psycopg` 3 isn't slated for GA with [SQLAlchemy until their 2.0 release](https://github.com/sqlalchemy/sqlalchemy/issues/6842), most users of SQLAlchemy are still on v1.4 with `psycopg2`. For those of us on SQLAlchemy v1.4, it doesn't really make sense to write tests with this fixture package while it requires `psycopg` 3. Would you be open to maintaining the v3.x line so that we can continue to get feature upgrades and fixes without the `psycopg` 3 requirement? I'd be happy to help maintain that branch if so.
### What are the results
### What are the expected results | open | 2022-03-07T14:10:22Z | 2022-03-08T12:54:43Z | https://github.com/dbfixtures/pytest-postgresql/issues/574 | [
"question"
] | winglian | 3 |
anselal/antminer-monitor | dash | 31 | Reboot when detected chips (Os) =/= 180 | I have a couple of weird D3s that sometimes say 175-179 chips are Os and the rest are Xs and are fixed with a simple reboot on their static ip page, it'd be great to have a built in option to automatically reboot the miner if any Xs are detected. | closed | 2017-11-27T08:36:39Z | 2017-11-27T20:32:49Z | https://github.com/anselal/antminer-monitor/issues/31 | [
":dancing_men: duplicate"
] | ckl33 | 4 |
Nemo2011/bilibili-api | api | 614 | [Question] Risk-control verification failure message | **Python version:** 3.12.1
**Module version:** 16.1.1
**Environment:** Windows
<!-- Be sure to provide the module version and make sure it is the latest -->
---
```
user_info = await bilibili_api.user.User(uid=uid, credential=credential).get_user_info()
bilibili_api.exceptions.ResponseCodeException.ResponseCodeException: 接口返回错误代码:-352,信息:风控校验失败。
{'code': -352, 'message': '风控校验失败', 'ttl': 1, 'data': {'v_voucher': 'voucher_121bba47-4c20-4250-998c-454ef9a0a8cc'}}
```
A risk-control ("风控") prompt appeared when fetching a Bilibili user's profile, even though a credential was already passed in the code.
After restarting the program it seems to work again; I'm not sure when this risk control will be triggered next. | closed | 2023-12-28T05:18:21Z | 2024-01-09T11:51:17Z | https://github.com/Nemo2011/bilibili-api/issues/614 | [
"need debug info",
"anti-spider"
] | iconFehu | 0 |
facebookresearch/fairseq | pytorch | 5,090 | NLLB License | ## ❓ Questions and Help
Here is the NLLB model's license, https://github.com/facebookresearch/fairseq/blob/nllb/LICENSE.model.md
Can we use NLLB model output (translations from language X to language Y) to train a model and release that model under a commercially permissive license (e.g., Apache 2.0)? I understand the model license is `Attribution-NonCommercial 4.0 International`, but to generate the model output we are actually paying for the compute hours. I understand that we cannot use the model weights for commercial purposes. But what about the output generated by the model?
| open | 2023-04-24T18:34:04Z | 2023-05-19T08:45:26Z | https://github.com/facebookresearch/fairseq/issues/5090 | [
"question",
"needs triage"
] | sbmaruf | 2 |
Farama-Foundation/Gymnasium | api | 742 | [Bug Report] gymnasium.error.NamespaceNotFound: Namespace gym_examples not found. | ### Describe the bug
I've followed https://gymnasium.farama.org/tutorials/gymnasium_basics/environment_creation/#creating-a-package.
I have successfully registered the environment, but when I try to use that environment, I receive the error mentioned in the title. I have seen a similar issue here (https://github.com/Farama-Foundation/Gymnasium/issues/400), but all of my code is using Gymnasium (you can see it on my GitHub). Did I do something wrong? If so, please help me.
My github code: https://github.com/NghiaPhamttk27/GridWorld/tree/main

### Code example
_No response_
### System info
_No response_
### Additional context
_No response_
### Checklist
- [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/Gymnasium/issues) in the repo
| closed | 2023-10-16T15:47:38Z | 2024-04-06T13:15:36Z | https://github.com/Farama-Foundation/Gymnasium/issues/742 | [
"bug"
] | NghiaPhamttk27 | 2 |
ultralytics/yolov5 | deep-learning | 13,402 | Feature map channel not same as what I defined | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Recently I've been working on head detection, and to deploy the model on devices with weak compute, I used a yoloface-500k model and tried to train it in the yolov5 framework. The model yaml is defined as follows:
nc: 1 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [4, 6, 7, 10, 11, 15]
- [16, 24, 33, 25, 26, 41]
- [47, 60, 83, 97, 141, 149]
backbone:
# [from, number, module, args]
# args: out_channels, size, stride
[
[-1, 1, Conv, [8, 3, 2]], # 0 [batch, 8, size/2, size/2]
[-1, 1, DWConv, [8, 3, 1]], # 1 [320]
[-1, 1, Conv, [4, 1, 1 ]], # 2 [320]
[-1, 1, Conv, [24, 1, 1]], # 3 [-1, 1, DWConv, [24, 3, 2]] # 4
[-1, 1, Conv, [6, 1, 1]], # 4
[-1, 1, Bottleneck3, [6]], # 5
[-1, 1, Conv, [36, 1, 1]], # 6
[-1, 1, DWConv, [36, 3, 2]], # 7 [160]
[-1, 1, Conv, [8, 1, 1]], # 8
[-1, 2, Bottleneck3, [8]], # 9
[-1, 1, Conv, [48, 1, 1]], # 10
[-1, 1, DWConv, [48, 3, 2]], # 11 [80]
[-1, 1, Conv, [16, 1, 1]], # 12
[-1, 3, Bottleneck3, [16]], # 13
[-1, 1, Conv, [96, 1, 1]], # 14
[-1, 1, DWConv, [96, 3, 1]], # 15
[-1, 1, Conv, [24, 1, 1]], # 16
[-1, 2, Bottleneck3, [24]], # 17
[-1, 1, Conv, [144, 1, 1]], # 18 [80]
[-1, 1, DWConv, [144, 3, 2]], # 19 [80] -> [40]
[-1, 1, Conv, [40, 1, 1]], # 20
[-1, 2, Bottleneck3, [40]], # 21 [batch, 40, size/16, size/16]
]
head: [
[-1, 1, Conv, [80, 1, 1]], # 22 [40]
[[-1, -4], 1, Concat, [1]], # 23 [batch, 224, size/16, size/16] [40]
[-1, 1, Conv, [48, 1, 1]], # 24
[-1, 1, DWConv, [48, 3, 1]], # 25
[-1, 1, Conv, [36, 1, 1]], # 26
[-1, 1, Conv, [18, 1, 1]], # 27 [batch, 18, size/8, size/8] -> [40]
[-5, 1, nn.Upsample, [None, 2, "nearest"]], # 28 [80]
[[-1, 11], 1, Concat, [1]], # 29 [80] ch = 272
[-1, 1, Conv, [24, 1, 1]], # 30
[-1, 1, DWConv, [24, 3, 1]], # 31
[-1, 1, Conv, [24, 1, 1]], # 32
[-1, 1, Conv, [18, 1, 1]], # 33 [batch, 18, 160, 160] -> [80]
[-5, 1, nn.Upsample, [None, 2, "nearest"]], # 34 [1, 272, 320, 320] -> [160]
[[-1, 7], 1, Concat, [1]], # 35
[-1, 1, Conv, [18, 1, 1]], # 36
[-1, 1, DWConv, [18, 3, 1]], # 37
[-1, 1, Conv, [24, 1, 1]], # 38
[-1, 1, Conv, [18, 1, 1]], # 39 [batch, 18, 320, 320] -> [160]
[[39, 33, 27], 1, Detect, [nc, anchors]],
]
```
The arrows in the file just denote size changes I made in a layer relative to a previous version; they are not important for this issue.
My problem is this: as defined in layers 27, 33 and 39, each of these layers should output an 18-channel feature map. However, in my experiment, where I run `detect.py` with the .pt weight file obtained after training, the output of these layers is 24 channels:
```txt
Layer 0: torch.Size([1, 8, 320, 320])
Layer 1: torch.Size([1, 8, 320, 320])
Layer 2: torch.Size([1, 8, 320, 320])
Layer 3: torch.Size([1, 24, 320, 320])
Layer 4: torch.Size([1, 8, 320, 320])
Layer 5: torch.Size([1, 8, 320, 320])
Layer 6: torch.Size([1, 40, 320, 320])
Layer 7: torch.Size([1, 40, 160, 160])
Layer 8: torch.Size([1, 8, 160, 160])
Layer 9: torch.Size([1, 8, 160, 160])
Layer 10: torch.Size([1, 48, 160, 160])
Layer 11: torch.Size([1, 48, 80, 80])
Layer 12: torch.Size([1, 16, 80, 80])
Layer 13: torch.Size([1, 16, 80, 80])
Layer 14: torch.Size([1, 96, 80, 80])
Layer 15: torch.Size([1, 96, 80, 80])
Layer 16: torch.Size([1, 24, 80, 80])
Layer 17: torch.Size([1, 24, 80, 80])
Layer 18: torch.Size([1, 144, 80, 80])
Layer 19: torch.Size([1, 144, 40, 40])
Layer 20: torch.Size([1, 40, 40, 40])
Layer 21: torch.Size([1, 40, 40, 40])
Layer 22: torch.Size([1, 80, 40, 40])
Layer 23: torch.Size([1, 224, 40, 40])
Layer 24: torch.Size([1, 48, 40, 40])
Layer 25: torch.Size([1, 48, 40, 40])
Layer 26: torch.Size([1, 40, 40, 40])
Layer 27: torch.Size([1, 24, 40, 40])
Layer 28: torch.Size([1, 224, 80, 80])
Layer 29: torch.Size([1, 272, 80, 80])
Layer 30: torch.Size([1, 24, 80, 80])
Layer 31: torch.Size([1, 24, 80, 80])
Layer 32: torch.Size([1, 24, 80, 80])
Layer 33: torch.Size([1, 24, 80, 80])
Layer 34: torch.Size([1, 272, 160, 160])
Layer 35: torch.Size([1, 312, 160, 160])
Layer 36: torch.Size([1, 24, 160, 160])
Layer 37: torch.Size([1, 24, 160, 160])
Layer 38: torch.Size([1, 24, 160, 160])
Layer 39: torch.Size([1, 24, 160, 160])
```
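For reference, the Detect head sizes each level's 1x1 conv as (nc + 5) * anchors-per-level, so the observed widths pin down which nc the built model actually used. This is an arithmetic check only; the nc=3 reading is my own inference (e.g. a dataset yaml overriding the model yaml's nc):

```python
def detect_channels(nc, anchors_per_level=3):
    # Each anchor predicts nc class scores + 4 box coords + 1 objectness.
    return (nc + 5) * anchors_per_level

expected = detect_channels(1)   # nc=1 as written in the model yaml -> 18
observed = 24                   # what the Detect-feeding layers actually output

# Solving (nc + 5) * 3 == 24 gives nc == 3:
implied_nc = observed // 3 - 5
```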
What makes it even weirder is that in my previous version, mentioned above,
```yaml
nc: 1 # number of classes
depth_multiple: 1.0 # model depth multiple
width_multiple: 1.0 # layer channel multiple
anchors:
- [4, 6, 7, 10, 11, 15]
- [16, 24, 33, 25, 26, 41]
- [47, 60, 83, 97, 141, 149]
backbone:
# [from, number, module, args]
# args: out_channels, size, stride
[
[-1, 1, Conv, [8, 3, 2]], # 0 [batch, 8, size/2, size/2]
[-1, 1, DWConv, [8, 3, 1]], # 1 [320]
[-1, 1, Conv, [4, 1, 1 ]], # 2 [320]
[-1, 1, Conv, [24, 1, 1]], # 3 [-1, 1, DWConv, [24, 3, 2]] # 4
[-1, 1, Conv, [6, 1, 1]], # 4
[-1, 1, Bottleneck3, [6]], # 5
[-1, 1, Conv, [36, 1, 1]], # 6
[-1, 1, DWConv, [36, 3, 2]], # 7 [160]
[-1, 1, Conv, [8, 1, 1]], # 8
[-1, 2, Bottleneck3, [8]], # 9
[-1, 1, Conv, [48, 1, 1]], # 10
[-1, 1, DWConv, [48, 3, 2]], # 11 [80]
[-1, 1, Conv, [16, 1, 1]], # 12
[-1, 3, Bottleneck3, [16]], # 13
[-1, 1, Conv, [96, 1, 1]], # 14
[-1, 1, DWConv, [96, 3, 1]], # 15
[-1, 1, Conv, [24, 1, 1]], # 16
[-1, 2, Bottleneck3, [24]], # 17
[-1, 1, Conv, [144, 1, 1]], # 18 [80]
[-1, 1, DWConv, [144, 3, 2]], # 19 [40]
[-1, 1, Conv, [40, 1, 1]], # 20
[-1, 2, Bottleneck3, [40]], # 21 [batch, 40, size/16, size/16]
]
head: [
[-1, 1, Conv, [80, 1, 1]], # 22
[-1, 1, nn.Upsample, [None, 2, "nearest"]], # 23 [1, 80, 80, 80]
[[-1, -6], 1, Concat, [1]], # 24 [batch, 224, size/8, size/8]
[-1, 1, Conv, [48, 1, 1]], # 25
[-1, 1, DWConv, [48, 3, 1]], # 26
[-1, 1, Conv, [36, 1, 1]], # 27
[-1, 1, Conv, [18, 1, 1]], # 28 [batch, 18, size/8, size/8]
[-5, 1, nn.Upsample, [None, 2, "nearest"]], # 29
[[-1, 10], 1, Concat, [1]], # 30
[-1, 1, Conv, [24, 1, 1]], # 31
[-1, 1, DWConv, [24, 3, 1]], # 32
[-1, 1, Conv, [24, 1, 1]], # 33
[-1, 1, Conv, [18, 1, 1]], # 34 [batch, 18, 160, 160]
[-5, 1, nn.Upsample, [None, 2, "nearest"]], # 35 [1, 272, 320, 320]
[[-1, 6], 1, Concat, [1]], # 36
[-1, 1, Conv, [18, 1, 1]], # 37
[-1, 1, DWConv, [18, 3, 1]], # 38
[-1, 1, Conv, [24, 1, 1]], # 39
[-1, 1, Conv, [18, 1, 1]], # 40 [batch, 18, 320, 320]
[[40, 34, 28], 1, Detect, [nc, anchors]],
]
```
This is a similar config, except that the strides of some convolution layers differ from those in the other version. By the way, I made these changes only to reduce the feature-map sizes from 80, 160, 320 down to 40, 80, 160, for better performance on the edge device. This version, by contrast, outputs three feature maps with 18 channels each.
```txt
Layer 0: torch.Size([1, 8, 320, 320])
Layer 1: torch.Size([1, 8, 320, 320])
Layer 2: torch.Size([1, 8, 320, 320])
Layer 3: torch.Size([1, 24, 320, 320])
Layer 4: torch.Size([1, 8, 320, 320])
Layer 5: torch.Size([1, 8, 320, 320])
Layer 6: torch.Size([1, 40, 320, 320])
Layer 7: torch.Size([1, 40, 160, 160])
Layer 8: torch.Size([1, 8, 160, 160])
Layer 9: torch.Size([1, 8, 160, 160])
Layer 10: torch.Size([1, 48, 160, 160])
Layer 11: torch.Size([1, 48, 80, 80])
Layer 12: torch.Size([1, 16, 80, 80])
Layer 13: torch.Size([1, 16, 80, 80])
Layer 14: torch.Size([1, 96, 80, 80])
Layer 15: torch.Size([1, 96, 80, 80])
Layer 16: torch.Size([1, 24, 80, 80])
Layer 17: torch.Size([1, 24, 80, 80])
Layer 18: torch.Size([1, 144, 80, 80])
Layer 19: torch.Size([1, 144, 40, 40])
Layer 20: torch.Size([1, 40, 40, 40])
Layer 21: torch.Size([1, 40, 40, 40])
Layer 22: torch.Size([1, 80, 40, 40])
Layer 23: torch.Size([1, 80, 80, 80])
Layer 24: torch.Size([1, 224, 80, 80])
Layer 25: torch.Size([1, 48, 80, 80])
Layer 26: torch.Size([1, 48, 80, 80])
Layer 27: torch.Size([1, 40, 80, 80])
Layer 28: torch.Size([1, 18, 80, 80])
Layer 29: torch.Size([1, 224, 160, 160])
Layer 30: torch.Size([1, 272, 160, 160])
Layer 31: torch.Size([1, 24, 160, 160])
Layer 32: torch.Size([1, 24, 160, 160])
Layer 33: torch.Size([1, 24, 160, 160])
Layer 34: torch.Size([1, 18, 160, 160])
Layer 35: torch.Size([1, 272, 320, 320])
Layer 36: torch.Size([1, 312, 320, 320])
Layer 37: torch.Size([1, 18, 320, 320])
Layer 38: torch.Size([1, 18, 320, 320])
Layer 39: torch.Size([1, 24, 320, 320])
Layer 40: torch.Size([1, 18, 320, 320])
```
I wonder what makes the output channels different. It seems that the yolov3 framework modified the last several layers of the new model automatically: in my design the output channels of the last four layers are 18, 18, 24, 24, but in the first txt file shown above they are all 24. Why does this change take place?
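For reference, in yolov3/yolov5-style frameworks the convolutions that feed the `Detect` layer usually have their output channels overwritten at build time to `na * (nc + 5)` per scale: 4 box coordinates, 1 objectness score, and `nc` class scores per anchor. A quick sanity check (the function name here is illustrative, not the framework's):

```python
def detect_out_channels(num_anchors_per_scale: int, num_classes: int) -> int:
    # Each anchor predicts 4 box coords + 1 objectness score + nc class scores.
    return num_anchors_per_scale * (num_classes + 5)

# With nc=1 and 3 anchors per scale (each anchor row above lists 3 w/h pairs):
print(detect_out_channels(3, 1))  # → 18
# Note that 24 corresponds to nc=3 with the same 3 anchors:
print(detect_out_channels(3, 3))  # → 24
```

Since 24 = 3 * (3 + 5), one possibility worth checking is whether the 24-channel model was built with `nc=3` rather than `nc=1` (for example, an `nc` from a dataset yaml overriding the model yaml). This is a guess from the arithmetic, not something the logs confirm.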
### Additional
_No response_ | closed | 2024-11-07T06:27:23Z | 2024-11-08T20:26:33Z | https://github.com/ultralytics/yolov5/issues/13402 | [
"question",
"detect"
] | tobymuller233 | 3 |
syrupy-project/syrupy | pytest | 116 | Syrupy assertion diff does not show missing carriage return | **Describe the bug**
Syrupy assertion diff does not show missing carriage return.
**To Reproduce**
Add this test.
```python
def test_example(snapshot):
assert snapshot == "line 1\r\nline 2"
```
Run `pytest --snapshot-update`.
Remove the `\r` from the string in the test case so you get:
```python
def test_example(snapshot):
assert snapshot == "line 1\nline 2"
```
Run `pytest`. The test will fail with a useless diff of "...".
**Expected behavior**
Syrupy should output each line with the missing carriage return, with some indication of what's missing.
**Additional context**
Syrupy v0.3.1
#113 fixed the bug where carriage returns were not being serialized. This issue addresses the missing functionality in the snapshot assertion diff reporter.
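A reporter can make such differences visible by diffing the `repr()` of each line instead of the raw text, since `repr()` turns invisible characters like `\r` into explicit escapes. A minimal illustrative sketch (not syrupy's actual implementation):

```python
import difflib

def visible_diff(expected: str, actual: str) -> list:
    # repr() makes \r, \t and other invisible characters explicit in the diff.
    exp_lines = [repr(line) for line in expected.splitlines(keepends=True)]
    act_lines = [repr(line) for line in actual.splitlines(keepends=True)]
    # Keep only added/removed lines from the ndiff output.
    return [d for d in difflib.ndiff(exp_lines, act_lines) if d[0] in "+-"]

# The missing '\r' now shows up as a literal difference between the two lines.
print(visible_diff("line 1\r\nline 2", "line 1\nline 2"))
```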
| closed | 2020-01-15T14:40:21Z | 2020-03-08T04:23:00Z | https://github.com/syrupy-project/syrupy/issues/116 | [
"bug",
"released"
] | noahnu | 3 |
apify/crawlee-python | automation | 311 | Document tiered proxies | closed | 2024-07-16T07:12:48Z | 2024-07-18T15:48:15Z | https://github.com/apify/crawlee-python/issues/311 | [
"documentation",
"t-tooling"
] | vdusek | 0 | |
fastapi/sqlmodel | fastapi | 208 | Pylance / VSCode cannot find sqlmodel typings correctly | ### First Check
- [X] I added a very descriptive title to this issue.
- [X] I used the GitHub search to find a similar issue and didn't find it.
- [X] I searched the SQLModel documentation, with the integrated search.
- [X] I already searched in Google "How to X in SQLModel" and didn't find any information.
- [X] I already read and followed all the tutorial in the docs and didn't find an answer.
- [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic).
- [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy).
### Commit to Help
- [X] I commit to help with one of those options 👆
### Example Code
```python
from fastapi import FastAPI
from model import Hero, User
from sqlmodel import create_engine, SQLModel, Session, select
app = FastAPI()
engine = create_engine("sqlite:///database.db")
SQLModel.metadata.create_all(engine)
@app.get("/")
async def read_root():
hero_1 = Hero(name="Deadpond", secret_name="Dive Wilson")
with Session(engine) as session:
session.add(hero_1)
session.commit()
session.refresh(hero_1)
return hero_1
```
### Description
<img width="1155" alt="Screen Shot 2021-12-29 at 2 58 04 PM" src="https://user-images.githubusercontent.com/260667/147709474-1613074e-c983-4736-ab5c-9b0d3fd8fad9.png">
For some reason, none of the sqlmodel objects are analyzed correctly. I have tried the same file in PyCharm and it works.
This _might_ be an issue with Pylance and not a bug in sqlmodel. I am hoping others might know of a solution.
### Operating System
macOS
### Operating System Details
_No response_
### SQLModel Version
0.0.6
### Python Version
3.10.1
### Additional Context
❯ pdm list
Package Version Location
----------------- -------- --------
anyio 3.4.0
asgiref 3.4.1
asttokens 2.0.5
black 21.12b0
click 8.0.3
devtools 0.8.0
executing 0.8.2
fastapi 0.70.1
greenlet 1.1.2
h11 0.12.0
httptools 0.3.0
idna 3.3
mypy 0.930
mypy-extensions 0.4.3
pathspec 0.9.0
platformdirs 2.4.1
pydantic 1.9.0a2
python-dotenv 0.19.2
pyyaml 6.0
six 1.16.0
sniffio 1.2.0
sqlalchemy 1.4.29
sqlalchemy2-stubs 0.0.2a19
sqlmodel 0.0.6
starlette 0.16.0
tomli 1.2.3
typing-extensions 4.0.1
uvicorn 0.16.0
uvloop 0.16.0
watchgod 0.7
websockets 10.1 | open | 2021-12-29T23:05:18Z | 2021-12-30T01:04:05Z | https://github.com/fastapi/sqlmodel/issues/208 | [
"question"
] | amir20 | 2 |
chezou/tabula-py | pandas | 125 | read_pdf returns None on my Linux only | # Summary of your issue
I have moved from Mac to Linux mint. I tried to run the read_pdf and every attempt results in a dataframe containing "None". I have not seen similar issue online.
# Environment
- [x] Paste the output of `import tabula; tabula.environment_info()` on Python REPL: ?
```
Python version:
2.7.15 |Anaconda, Inc.| (default, May 1 2018, 23:32:55)
[GCC 7.2.0]
Java version:
openjdk version "1.8.0_03-Ubuntu"
OpenJDK Runtime Environment (build 1.8.0_03-Ubuntu-8u77-b03-3ubuntu3-b03)
OpenJDK 64-Bit Server VM (build 25.03-b03, mixed mode)
tabula-py version: 1.3.1
platform: Linux-4.4.0-21-generic-x86_64-with-debian-stretch-sid
uname:
('Linux', 'nawaf', '4.4.0-21-generic', '#37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016', 'x86_64', 'x86_64')
linux_distribution: (u'Ubuntu', u'16.04', u'Xenial Xerus')
mac_ver: ('', ('', '', ''), '')
```
- [x] Paste the output of `python --version` command on your terminal: ?
`Python 2.7.15 :: Anaconda, Inc.`
- [x] Paste the output of `java -version` command on your terminal: ?
```
openjdk version "1.8.0_03-Ubuntu"
OpenJDK Runtime Environment (build 1.8.0_03-Ubuntu-8u77-b03-3ubuntu3-b03)
OpenJDK 64-Bit Server VM (build 25.03-b03, mixed mode)
```
- [x] Does `java -h` command work well?; Ensure your java command is included in `PATH`
```
Usage: java [-options] class [args...]
(to execute a class)
or java [-options] -jar jarfile [args...]
(to execute a jar file)
where options include:
-d32 use a 32-bit data model if available
-d64 use a 64-bit data model if available
-server to select the "server" VM
-zero to select the "zero" VM
-jamvm to select the "jamvm" VM
-dcevm to select the "dcevm" VM
The default VM is server,
because you are running on a server-class machine.
$PATH
bash: /usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/usr/lib/jvm/java-1.8.0-openjdk-amd64/bin:/home/nalsabhan/anaconda2/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games: No such file or directory
```
- [x] Write your OS and it's version: ?
`Linux version 4.4.0-21-generic (buildd@lgw01-21) (gcc version 5.3.1 20160413 (Ubuntu 5.3.1-14ubuntu2) ) #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016`
- [x] (Optional, but really helpful) Your PDF URL: ?
https://www.tsu.ge/data/file_db/faculty_social_political/B2-nimushi.pdf
# What did you do when you faced the problem?
Tested multiple pdf files with text in them. Tried to update JAVA and made sure it is in the path
## Example code:
```
import tabula as tb
df = tb.read_pdf("pdfs/test.pdf", pages = [3])
print df
```
## Output:
```
None
```
## What did you intend it to be?
I expected a table with text scattered in it.
| closed | 2018-12-24T22:10:18Z | 2018-12-30T11:00:57Z | https://github.com/chezou/tabula-py/issues/125 | [] | nalsabhan | 5 |
waditu/tushare | pandas | 792 | The results returned by stock_basic are incomplete | ts_code
0 000001.SZ
1 000002.SZ
2 000004.SZ
3 000005.SZ
4 000006.SZ
5 000007.SZ
6 000008.SZ
7 000009.SZ
8 000010.SZ
9 000011.SZ
10 000012.SZ
11 000014.SZ
12 000016.SZ
13 000017.SZ
14 000018.SZ
15 000019.SZ
16 000020.SZ
17 000021.SZ
18 000022.SZ
19 000023.SZ
20 000025.SZ
21 000026.SZ
22 000027.SZ
23 000028.SZ
24 000029.SZ
25 000030.SZ
26 000031.SZ
27 000032.SZ
28 000034.SZ
29 000035.SZ
... ...
3526 603936.SH
3527 603937.SH
3528 603938.SH
3529 603939.SH
3530 603955.SH
3531 603958.SH
3532 603959.SH
3533 603960.SH
3534 603963.SH
3535 603966.SH
3536 603968.SH
3537 603969.SH
3538 603970.SH
3539 603976.SH
3540 603977.SH
3541 603978.SH
3542 603979.SH
3543 603980.SH
3544 603985.SH
3545 603986.SH
3546 603987.SH
3547 603988.SH
3548 603989.SH
3549 603990.SH
3550 603991.SH
3551 603993.SH
3552 603996.SH
3553 603997.SH
3554 603998.SH
3555 603999.SH
[3556 rows x 1 columns]
| closed | 2018-10-30T13:09:42Z | 2018-12-19T05:25:34Z | https://github.com/waditu/tushare/issues/792 | [] | zhanguoce | 5 |
saulpw/visidata | pandas | 2,379 | Can't disable mouse | **Small description**
Mouse-disable commands do nothing. In-session disable states invalid command. I'm running in a headless environment over SSH. Since I don't have access to a system clipboard, I need to be able to select values.
**Expected result**
I should be free to select terminal text with my mouse.
**Actual result with screenshot**
If you get an unexpected error, please include the full stack trace that you get with `Ctrl-E`.
Not disabled. Not including a screenshot in order to avoid confusing the issue.
**Steps to reproduce with sample data and a .vd**
First try reproducing without any user configuration by using the flag `-N`.
e.g. `echo "abc" | vd -f txt -N`
Tried adding to ~/.visidatarc:
```
options.mouse_interval = 0 # disables the mouse-click
options.scroll_incr = 0 # disables the scroll wheel
```
Tried disabling directly:
```
[SPACE] mouse-disable
```
Shows:
```
[...]| no binding for mouse-disable
```
Please attach the commandlog (saved with `Ctrl-D`) to show the steps that led to the issue.
See [here](http://visidata.org/docs/save-restore/) for more details.
Not necessary given that specific commands just aren't doing anything with no prior history.
**Additional context**
Please include the version of VisiData and Python.
VisiData: 1.5.2-1
Python: 3.8.10
VisiData was installed via Apt on Ubuntu 20.04.1 .
The bodge for my particular issue (needing to copy values out the remote terminal) is just to enter command mode, which makes everything in the terminal selectable.
| closed | 2024-04-12T05:17:25Z | 2024-04-12T19:52:02Z | https://github.com/saulpw/visidata/issues/2379 | [
"bug",
"fixed"
] | dsoprea | 2 |
ned2/slapdash | dash | 12 | Enable callback validation | Out of the box, a dynamic multi-page app like Slapdash requires callback validation to be turned off as callbacks will need to be defined that don't yet exist in the layout. the [Dash docs](https://dash.plot.ly/urls) have an example in the section titled "Dynamically Create a Layout for Multi-Page App Validation" that show how you can not suppress callback validation. | closed | 2018-12-29T03:39:06Z | 2022-10-19T12:35:35Z | https://github.com/ned2/slapdash/issues/12 | [] | ned2 | 2 |
InstaPy/InstaPy | automation | 6,450 | acc.txt | Yountrust | closed | 2022-01-03T00:44:32Z | 2022-01-08T19:28:15Z | https://github.com/InstaPy/InstaPy/issues/6450 | [] | liyahworks | 1 |
cvat-ai/cvat | computer-vision | 8,263 | I want to work as a data annotator | I want to work for cvat.ai as a data annotator. Can you help me with how to start? | closed | 2024-08-06T13:47:09Z | 2024-08-06T16:19:27Z | https://github.com/cvat-ai/cvat/issues/8263 | [
"invalid"
] | fatmard947 | 0 |
dask/dask | scikit-learn | 10,881 | applying tuple with pyarrow |
Applying `tuple` to a dask dataframe without pyarrow installed gives a column of tuples, as expected. With pyarrow installed, however, the same operation gives strings instead.
The problem can be reproduced by the following commands in the console:
```bash
$ pyenv deactivate
$ pyenv virtualenv --clear 3.10.12 tuple10 # create a clear environment
$ pyenv activate tuple10
$ pip install dask[dataframe]==2024.1.1
$ python tuple_test.py # we expect a tuple to be the result
d
0 <class 'tuple'>
$ pip install pyarrow
$ python tuple_test.py # but with pyarrow we get a string instead
d
0 <class 'str'>
```
with tuple_test.py
```python
import dask.dataframe as dd
import pandas as pd
def apply_tuple_on_two_cols(
counts_df: dd.DataFrame,
):
counts_df["d"] = counts_df[["b", "c"]].apply(
tuple, axis=1, meta=pd.Series(dtype=object)
)
counts_df["d"] = counts_df["d"].apply(
type,
meta=pd.Series(dtype=object),
)
return counts_df[["d"]]
def test_tuple_application():
counts = dd.from_pandas(
pd.DataFrame({"a": ["1"], "b": ["2"], "c": [3]}), npartitions=1
)
result = apply_tuple_on_two_cols(counts)
print(result.compute())
if __name__ == "__main__":
test_tuple_application()
```
**Environment**:
- Dask version: 2024.1.1
- Pyarrow version: 15.0.0
- Python version: 3.10.12
- Operating System: Ubuntu 22.04
- Install method (conda, pip, source): pip
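If the cause is dask's automatic conversion of object/string columns to pyarrow-backed strings (enabled by default in recent releases once pyarrow is importable), turning that conversion off may restore the tuple-preserving behaviour. A configuration sketch; the `dataframe.convert-string` option name is an assumption based on recent dask releases:

```yaml
# ~/.config/dask/dask.yaml (untested sketch)
dataframe:
  convert-string: false
```

The same option can presumably be set at runtime with `dask.config.set({"dataframe.convert-string": False})` before building the dataframe.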
| open | 2024-02-01T15:20:45Z | 2024-02-02T08:38:50Z | https://github.com/dask/dask/issues/10881 | [
"convert-string"
] | SurkynRik | 2 |
gunthercox/ChatterBot | machine-learning | 1,555 | AttributeError: 'ChatBot' object has no attribute 'set_trainer' | Hi,
Just after installing ChatterBot ( version is 1.0.0a3.) , I tried to execute the following code snippet from quick start guide:
```
from chatterbot import ChatBot
chatbot = ChatBot("Ron Obvious")
from chatterbot.trainers import ListTrainer
conversation = [
"Hello",
"Hi there!",
"How are you doing?",
"I'm doing great.",
"That is good to hear",
"Thank you.",
"You're welcome."
]
chatbot.set_trainer(ListTrainer)
chatbot.train(conversation)
```
It failed to execute with the error `AttributeError: 'ChatBot' object has no attribute 'set_trainer'`. I couldn't find any other post related to this attribute either.
I skimmed through the code of chatterbot.py and found that ChatBot indeed has neither a 'set_trainer' nor a 'train' method.
Am I missing something here? I would really appreciate if anybody could help me here.
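For what it's worth, in the ChatterBot 1.0 series the training API reportedly moved out of `ChatBot` into standalone trainer classes, which would explain the missing attributes. An untested sketch of the 1.0-style call (this assumes the installed `chatterbot` 1.0 package; I can't run it here):

```python
from chatterbot import ChatBot
from chatterbot.trainers import ListTrainer

chatbot = ChatBot("Ron Obvious")

# In 1.0.x the trainer wraps the bot rather than being set on it.
trainer = ListTrainer(chatbot)
trainer.train([
    "Hello",
    "Hi there!",
])
```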
Thanks, | closed | 2019-01-09T09:55:18Z | 2020-12-27T06:32:24Z | https://github.com/gunthercox/ChatterBot/issues/1555 | [
"answered"
] | achingacham | 18 |
ultralytics/ultralytics | pytorch | 19,566 | Cleanest way to customize the model.val() method for custom validation. | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hello, I plan to customise the standard validation (see photo) to my needs (adding `def ap50` and `def ap70` with difficulty levels). How should I go about this to extend the validation cleanly? By default, the OBBValidator (based on DetectionValidator) is used for this.
**Current validation code:**
```python
from ultralytics import YOLO
model = YOLO('/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/kitti_bev_yolo/run10_Adam_89.2_87.9/weights/best.pt', task='obb', verbose=False)
metrics = model.val_kitti(data='/home/heizung1/ultralytics_yolov8-obb_ob_kitti/ultralytics/cfg/datasets/kitti_bev.yaml', imgsz=640,
batch=16, save_json=False, conf=0.001, iou=0.5, max_det=300, half=False,
device='0', dnn=False, plots=False, rect=False, split='val', project=None, name=None)
```
**Current validation output:**

**Desired validation output:**
`Class` , `Images`, `Instances`, `Box(P R AP50 AP70): 100% etc.`
`AP50` and `AP70` categorized into difficulty columns: Easy, Moderate, Hard.
The information for the difficulties is in my validation labels: `standard OBB format (1 0.223547 0.113517 0.223496 0.049611 0.258965 0.049583 0.259016 0.113489) difficulty infos (22.58 0.00 0)`.
For the training I modified: https://github.com/ultralytics/ultralytics/blob/23a90142dc66fbb180fe1bb513a2adc44322c978/ultralytics/data/utils.py#L97 to handle only the required label information.
I think it would be good to start in small steps. My first consideration is how to load the labels from a newly created .cache file that contains all the label information and then split them into two variables: one used for the standard processes (prediction, IoU calculation, metrics), and the other processed to derive one difficulty level from the three values.
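For the "derive one difficulty level from the three values" step: if the three extra fields correspond to KITTI-style quantities (e.g. box height in pixels, truncation, occlusion, which is an assumption, since the field order in `(22.58 0.00 0)` isn't documented here), the standard KITTI cutoffs can be expressed as a small pure-Python helper:

```python
def kitti_difficulty(box_height_px: float, occlusion: int, truncation: float) -> str:
    # Standard KITTI cutoffs: (min bbox height, max occlusion level, max truncation).
    if box_height_px >= 40 and occlusion <= 0 and truncation <= 0.15:
        return "Easy"
    if box_height_px >= 25 and occlusion <= 1 and truncation <= 0.30:
        return "Moderate"
    if box_height_px >= 25 and occlusion <= 2 and truncation <= 0.50:
        return "Hard"
    return "Ignored"

print(kitti_difficulty(45.0, 0, 0.0))   # → Easy
print(kitti_difficulty(30.0, 1, 0.2))   # → Moderate
```

The thresholds (40/25 px height, occlusion levels 0/1/2, truncation 0.15/0.30/0.50) follow the usual KITTI evaluation convention; adjust them if your labels encode the fields differently.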
Thank you for any useful input and guidance during this customization!
### Additional
_No response_ | open | 2025-03-07T09:44:22Z | 2025-03-15T03:28:25Z | https://github.com/ultralytics/ultralytics/issues/19566 | [
"question",
"OBB"
] | Petros626 | 13 |
idealo/image-super-resolution | computer-vision | 17 | Weights | Really simple issue, but the weights for Large RDN model were updated in the wget command, but not in the execution of ISR_Prediction_Tutorial.ipynb (it's downloading PSNR-driven/rdn-C6-D20-G64-G064-x2_PSNR_epoch086.hdf5, but calling weights/rdn-C6-D20-G64-G064-x2_div2k-e086.hdf5) | closed | 2019-04-09T11:28:39Z | 2019-04-11T16:29:05Z | https://github.com/idealo/image-super-resolution/issues/17 | [] | victorca25 | 1 |
plotly/dash-table | plotly | 742 | Feature request: Clear cell selection | Hi!
It's a common question at the [community](https://community.plotly.com/t/deselect-cell-in-data-table/25447).
I know we can use `Output("table", "selected_cells")` to set an empty cell selection, but it still leaves some selection box like this:

Is there a way to remove this as well?
It'd be nice to use `esc` as a hotkey to clear the cells selections. Or maybe click twice at a single selected cell to clear the selection. | open | 2020-04-13T23:50:30Z | 2020-04-13T23:50:30Z | https://github.com/plotly/dash-table/issues/742 | [] | victor-ab | 0 |
apache/airflow | automation | 48,009 | Restore support of starting mapped tasks from triggerer | ### Body
#48006 disabled starting mapped tasks from triggerer because it was crashing the scheduler (https://github.com/apache/airflow/issues/47735). It was discovered late in the airflow 3 beta process, so disabling it was a reasonable choice.
When time permits, one could look into restoring this capability, perhaps with limitations.
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | open | 2025-03-20T13:39:31Z | 2025-03-24T16:54:20Z | https://github.com/apache/airflow/issues/48009 | [
"kind:feature",
"kind:meta",
"area:dynamic-task-mapping",
"area:Triggerer"
] | dstandish | 0 |
tensorpack/tensorpack | tensorflow | 1,276 | Error when trying to register my own dataset | Hi
I think I got the same error as in #1215: https://github.com/tensorpack/tensorpack/issues/1215.
This is the structure of my own dataset:
COCO/DIR/
_______|__annotations/
_________________|__instances_train2017.json
_________________|__instances_val2017.json
_______|__train2017
_______|__val2017
As it was proposed on #1215 I used this code in coco.py:
```
def register_coco(basedir):
DatasetRegistry.register("train2017", lambda: COCODetection(basedir, "train2017"))
DatasetRegistry.register("val2017", lambda: COCODetection(basedir, "val2017"))
```
but I get this error:
```
Traceback (most recent call last):
File "train.py", line 74, in <module>
train_dataflow = get_train_dataflow()
File "/home/federicolondon2019/tensorpack/examples/FasterRCNN/data.py", line 391, in get_train_dataflow
roidbs = list(itertools.chain.from_iterable(DatasetRegistry.get(x).training_roidbs() for x in cfg.DATA.TRAIN))
File "/home/federicolondon2019/tensorpack/examples/FasterRCNN/data.py", line 391, in <genexpr>
roidbs = list(itertools.chain.from_iterable(DatasetRegistry.get(x).training_roidbs() for x in cfg.DATA.TRAIN))
File "/home/federicolondon2019/tensorpack/examples/FasterRCNN/dataset/dataset.py", line 90, in get
assert name in DatasetRegistry._registry, "Dataset {} was not registered!".format(name)
AssertionError: Dataset t was not registered!
```
I changed coco.py and config.py to adapt my own dataset to the code.
Below coco.py, config.py and the log.
Hopefully you can help me.
Thanks!
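Incidentally, the message "Dataset t was not registered!" (a dataset literally named `t`) is the classic symptom of iterating over a string where a tuple was expected. In the config below, `_C.DATA.TRAIN = ('train2017')` is a plain string, because parentheses alone don't make a tuple, so iterating over `cfg.DATA.TRAIN` yields single characters, starting with `'t'`. A minimal demonstration:

```python
train = ('train2017')       # parentheses alone do NOT create a tuple: this is a str
print([x for x in train])   # → ['t', 'r', 'a', 'i', 'n', '2', '0', '1', '7']

train = ('train2017',)      # the trailing comma makes it a one-element tuple
print([x for x in train])   # → ['train2017']
```

If that is the cause, adding the trailing comma (as `_C.DATA.VAL = ('val2017', )` already does) should fix the "Dataset t" error.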
### 1. What you did:
(1) **If you're using examples, what's the command you run:**
train.py --config MODE_MASK=True MODE_FPN=True DATA.BASEDIR=/home/federicolondon2019/tensorpack/COCO/DIR BACKBONE.WEIGHTS=/home/federicolondon2019/tensorpack/models/ImageNet-R50-AlignPadding.npz
(2) **If you're using examples, have you made any changes to the examples? Paste `git status; git diff` here:**
COCO.PY (changed version)
```
import json
import numpy as np
import os
import tqdm
from tensorpack.utils import logger
from tensorpack.utils.timer import timed_operation
from config import config as cfg
from dataset import DatasetRegistry, DatasetSplit
__all__ = ['register_coco']
class COCODetection(DatasetSplit):
# handle the weird (but standard) split of train and val
_INSTANCE_TO_BASEDIR = {
'valminusminival2014': 'val2017',
'minival2014': 'val2017',
}
"""
Mapping from the incontinuous COCO category id to an id in [1, #category]
For your own coco-format, dataset, change this to an **empty dict**.
"""
COCO_id_to_category_id = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5, 6: 6, 7: 7, 8: 8, 9: 9, 10: 10, 11: 11, 12: 12, 13: 13, 14: 14, 15: 15, 16: 16, 17: 17, 18: 18, 19: 19, 20: 20, 21: 21, 22: 22, 23: 23, 24: 24, 25: 25, 26: 26, 27: 27, 28: 28, 29: 29, 30: 30, 31: 31, 32: 32, 33: 33, 34: 34, 35: 35, 36: 36, 37: 37} # noqa
"""
80 names for COCO
For your own coco-format dataset, change this.
"""
class_names = [
'Bird', 'Ground_Animal', 'Crosswalk_Plain', 'Person', 'Bicyclist', 'Motorcyclist', 'Other_Rider', 'Lane_Marking_-_Crosswalk', 'Banner', 'Bench', 'Bike_Rack', 'Billboard', 'Catch_Basin', 'CCTV_Camera', 'Fire_Hydrant', 'Junction_Box', 'Mailbox', 'Manhole', 'Phone_Booth', 'Street_Light', 'Pole', 'Traffic_Sign_Frame', 'Utility_Pole', 'Traffic_Light', 'Traffic_Sign_(Back)', 'Traffic_Sign_(Front)', 'Trash_Can', 'Bicycle', 'Boat', 'Bus', 'Car', 'Caravan', 'Motorcycle', 'Other_Vehicle', 'Trailer', 'Truck', 'Wheeled_Slow'] # noqa
cfg.DATA.CLASS_NAMES = class_names
def __init__(self, basedir, split):
"""
Args:
basedir (str): root of the dataset which contains the subdirectories for each split and annotations
split (str): the name of the split, e.g. "train2017".
The split has to match an annotation file in "annotations/" and a directory of images.
Examples:
For a directory of this structure:
DIR/
annotations/
instances_XX.json
instances_YY.json
XX/
YY/
use `COCODetection(DIR, 'XX')` and `COCODetection(DIR, 'YY')`
"""
basedir = os.path.expanduser(basedir)
self._imgdir = os.path.realpath(os.path.join(
basedir, self._INSTANCE_TO_BASEDIR.get(split, split)))
assert os.path.isdir(self._imgdir), "{} is not a directory!".format(self._imgdir)
annotation_file = os.path.join(
basedir, 'annotations/instances_{}.json'.format(split))
assert os.path.isfile(annotation_file), annotation_file
from pycocotools.coco import COCO
self.coco = COCO(annotation_file)
self.annotation_file = annotation_file
logger.info("Instances loaded from {}.".format(annotation_file))
# https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
def print_coco_metrics(self, json_file):
"""
Args:
json_file (str): path to the results json file in coco format
Returns:
dict: the evaluation metrics
"""
from pycocotools.cocoeval import COCOeval
ret = {}
cocoDt = self.coco.loadRes(json_file)
cocoEval = COCOeval(self.coco, cocoDt, 'bbox')
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
fields = ['IoU=0.5:0.95', 'IoU=0.5', 'IoU=0.75', 'small', 'medium', 'large']
for k in range(6):
ret['mAP(bbox)/' + fields[k]] = cocoEval.stats[k]
json_obj = json.load(open(json_file))
if len(json_obj) > 0 and 'segmentation' in json_obj[0]:
cocoEval = COCOeval(self.coco, cocoDt, 'segm')
cocoEval.evaluate()
cocoEval.accumulate()
cocoEval.summarize()
for k in range(6):
ret['mAP(segm)/' + fields[k]] = cocoEval.stats[k]
return ret
def load(self, add_gt=True, add_mask=False):
"""
Args:
add_gt: whether to add ground truth bounding box annotations to the dicts
add_mask: whether to also add ground truth mask
Returns:
a list of dict, each has keys including:
'image_id', 'file_name',
and (if add_gt is True) 'boxes', 'class', 'is_crowd', and optionally
'segmentation'.
"""
with timed_operation('Load annotations for {}'.format(
os.path.basename(self.annotation_file))):
img_ids = self.coco.getImgIds()
img_ids.sort()
# list of dict, each has keys: height,width,id,file_name
imgs = self.coco.loadImgs(img_ids)
for idx, img in enumerate(tqdm.tqdm(imgs)):
img['image_id'] = img.pop('id')
img['file_name'] = os.path.join(self._imgdir, img['file_name'])
if idx == 0:
# make sure the directories are correctly set
assert os.path.isfile(img["file_name"]), img["file_name"]
if add_gt:
self._add_detection_gt(img, add_mask)
return imgs
def _add_detection_gt(self, img, add_mask):
"""
Add 'boxes', 'class', 'is_crowd' of this image to the dict, used by detection.
If add_mask is True, also add 'segmentation' in coco poly format.
"""
# ann_ids = self.coco.getAnnIds(imgIds=img['image_id'])
# objs = self.coco.loadAnns(ann_ids)
objs = self.coco.imgToAnns[img['image_id']] # equivalent but faster than the above two lines
if 'minival' not in self.annotation_file:
# TODO better to check across the entire json, rather than per-image
ann_ids = [ann["id"] for ann in objs]
assert len(set(ann_ids)) == len(ann_ids), \
"Annotation ids in '{}' are not unique!".format(self.annotation_file)
# clean-up boxes
width = img.pop('width')
height = img.pop('height')
all_boxes = []
all_segm = []
all_cls = []
all_iscrowd = []
for objid, obj in enumerate(objs):
if obj.get('ignore', 0) == 1:
continue
x1, y1, w, h = list(map(float, obj['bbox']))
# bbox is originally in float
# x1/y1 means upper-left corner and w/h means true w/h. This can be verified by segmentation pixels.
# But we do make an assumption here that (0.0, 0.0) is upper-left corner of the first pixel
x2, y2 = x1 + w, y1 + h
# np.clip would be quite slow here
x1 = min(max(x1, 0), width)
x2 = min(max(x2, 0), width)
y1 = min(max(y1, 0), height)
y2 = min(max(y2, 0), height)
w, h = x2 - x1, y2 - y1
# Require non-zero seg area and more than 1x1 box size
if obj['area'] > 1 and w > 0 and h > 0 and w * h >= 4:
all_boxes.append([x1, y1, x2, y2])
all_cls.append(self.COCO_id_to_category_id.get(obj['category_id'], obj['category_id']))
iscrowd = obj.get("iscrowd", 0)
all_iscrowd.append(iscrowd)
if add_mask:
segs = obj['segmentation']
if not isinstance(segs, list):
assert iscrowd == 1
all_segm.append(None)
else:
valid_segs = [np.asarray(p).reshape(-1, 2).astype('float32') for p in segs if len(p) >= 6]
if len(valid_segs) == 0:
logger.error("Object {} in image {} has no valid polygons!".format(objid, img['file_name']))
elif len(valid_segs) < len(segs):
logger.warn("Object {} in image {} has invalid polygons!".format(objid, img['file_name']))
all_segm.append(valid_segs)
# all geometrically-valid boxes are returned
if len(all_boxes):
img['boxes'] = np.asarray(all_boxes, dtype='float32') # (n, 4)
else:
img['boxes'] = np.zeros((0, 4), dtype='float32')
cls = np.asarray(all_cls, dtype='int32') # (n,)
if len(cls):
assert cls.min() > 0, "Category id in COCO format must > 0!"
img['class'] = cls # n, always >0
img['is_crowd'] = np.asarray(all_iscrowd, dtype='int8') # n,
if add_mask:
# also required to be float32
img['segmentation'] = all_segm
def training_roidbs(self):
return self.load(add_gt=True, add_mask=cfg.MODE_MASK)
def inference_roidbs(self):
return self.load(add_gt=False)
def eval_inference_results(self, results, output):
continuous_id_to_COCO_id = {v: k for k, v in self.COCO_id_to_category_id.items()}
for res in results:
# convert to COCO's incontinuous category id
if res['category_id'] in continuous_id_to_COCO_id:
res['category_id'] = continuous_id_to_COCO_id[res['category_id']]
# COCO expects results in xywh format
box = res['bbox']
box[2] -= box[0]
box[3] -= box[1]
res['bbox'] = [round(float(x), 3) for x in box]
assert output is not None, "COCO evaluation requires an output file!"
with open(output, 'w') as f:
json.dump(results, f)
if len(results):
# sometimes may crash if the results are empty?
return self.print_coco_metrics(output)
else:
return {}
def register_coco(basedir):
DatasetRegistry.register("train2017", lambda: COCODetection(basedir, "train2017"))
DatasetRegistry.register("val2017", lambda: COCODetection(basedir, "val2017"))
if __name__ == '__main__':
basedir = '~/data/coco'
c = COCODetection(basedir, 'train2014')
roidb = c.load(add_gt=True, add_mask=True)
print("#Images:", len(roidb))
```
CONFIG.PY (cahnged version)
```
import numpy as np
import os
import pprint
import six
from tensorpack.utils import logger
from tensorpack.utils.gpu import get_num_gpu
__all__ = ['config', 'finalize_configs']
class AttrDict():
    _freezed = False
    """ Avoid accidental creation of new hierarchies. """

    def __getattr__(self, name):
        if self._freezed:
            raise AttributeError(name)
        if name.startswith('_'):
            # Do not mess with internals. Otherwise copy/pickle will fail
            raise AttributeError(name)
        ret = AttrDict()
        setattr(self, name, ret)
        return ret

    def __setattr__(self, name, value):
        if self._freezed and name not in self.__dict__:
            raise AttributeError(
                "Config was freezed! Unknown config: {}".format(name))
        super().__setattr__(name, value)

    def __str__(self):
        return pprint.pformat(self.to_dict(), indent=1, width=100, compact=True)

    __repr__ = __str__

    def to_dict(self):
        """Convert to a nested dict. """
        return {k: v.to_dict() if isinstance(v, AttrDict) else v
                for k, v in self.__dict__.items() if not k.startswith('_')}

    def update_args(self, args):
        """Update from command line args. """
        for cfg in args:
            keys, v = cfg.split('=', maxsplit=1)
            keylist = keys.split('.')

            dic = self
            for i, k in enumerate(keylist[:-1]):
                assert k in dir(dic), "Unknown config key: {}".format(keys)
                dic = getattr(dic, k)
            key = keylist[-1]

            oldv = getattr(dic, key)
            if not isinstance(oldv, str):
                v = eval(v)
            setattr(dic, key, v)

    def freeze(self, freezed=True):
        self._freezed = freezed
        for v in self.__dict__.values():
            if isinstance(v, AttrDict):
                v.freeze(freezed)

    # avoid silent bugs
    def __eq__(self, _):
        raise NotImplementedError()

    def __ne__(self, _):
        raise NotImplementedError()
config = AttrDict()
_C = config # short alias to avoid coding
# mode flags ---------------------
_C.TRAINER = 'horovod' # options: 'horovod', 'replicated'
_C.MODE_MASK = True # FasterRCNN or MaskRCNN
_C.MODE_FPN = False
# dataset -----------------------
_C.DATA.BASEDIR = '/home/federicolondon2019/tensorpack/COCO/DIR'
# All TRAIN dataset will be concatenated for training.
_C.DATA.TRAIN = ('train2017') # i.e. trainval35k, AKA train2017
# Each VAL dataset will be evaluated separately (instead of concatenated)
_C.DATA.VAL = ('val2017', ) # AKA val2017
# This two config will be populated later by the dataset loader:
_C.DATA.NUM_CATEGORY = 37 # without the background class (e.g., 80 for COCO)
_C.DATA.CLASS_NAMES = [] # NUM_CLASS (NUM_CATEGORY+1) strings, the first is "BG".
# whether the coordinates in the annotations are absolute pixel values, or a relative value in [0, 1]
_C.DATA.ABSOLUTE_COORD = True
# Number of data loading workers.
# In case of horovod training, this is the number of workers per-GPU (so you may want to use a smaller number).
# Set to 0 to disable parallel data loading
_C.DATA.NUM_WORKERS = 10
# backbone ----------------------
_C.BACKBONE.WEIGHTS = '' # /path/to/weights.npz
_C.BACKBONE.RESNET_NUM_BLOCKS = [3, 4, 6, 3] # for resnet50
# RESNET_NUM_BLOCKS = [3, 4, 23, 3] # for resnet101
_C.BACKBONE.FREEZE_AFFINE = False # do not train affine parameters inside norm layers
_C.BACKBONE.NORM = 'FreezeBN' # options: FreezeBN, SyncBN, GN, None
_C.BACKBONE.FREEZE_AT = 2 # options: 0, 1, 2
# Use a base model with TF-preferred padding mode,
# which may pad more pixels on right/bottom than top/left.
# See https://github.com/tensorflow/tensorflow/issues/18213
# In tensorpack model zoo, ResNet models with TF_PAD_MODE=False are marked with "-AlignPadding".
# All other models under `ResNet/` in the model zoo are using TF_PAD_MODE=True.
# Using either one should probably give the same performance.
# We use the "AlignPadding" one just to be consistent with caffe2.
_C.BACKBONE.TF_PAD_MODE = False
_C.BACKBONE.STRIDE_1X1 = False # True for MSRA models
# schedule -----------------------
_C.TRAIN.NUM_GPUS = None # by default, will be set from code
_C.TRAIN.WEIGHT_DECAY = 1e-4
_C.TRAIN.BASE_LR = 1e-2 # defined for total batch size=8. Otherwise it will be adjusted automatically
_C.TRAIN.WARMUP = 1000 # in terms of iterations. This is not affected by #GPUs
_C.TRAIN.WARMUP_INIT_LR = 1e-2 * 0.33 # defined for total batch size=8. Otherwise it will be adjusted automatically
_C.TRAIN.STEPS_PER_EPOCH = 500
_C.TRAIN.STARTING_EPOCH = 1 # the first epoch to start with, useful to continue a training
# LR_SCHEDULE means equivalent steps when the total batch size is 8.
# When the total bs!=8, the actual iterations to decrease learning rate, and
# the base learning rate are computed from BASE_LR and LR_SCHEDULE.
# Therefore, there is *no need* to modify the config if you only change the number of GPUs.
_C.TRAIN.LR_SCHEDULE = [120000, 160000, 180000] # "1x" schedule in detectron
# _C.TRAIN.LR_SCHEDULE = [240000, 320000, 360000] # "2x" schedule in detectron
# Longer schedules for from-scratch training (https://arxiv.org/abs/1811.08883):
# _C.TRAIN.LR_SCHEDULE = [960000, 1040000, 1080000] # "6x" schedule in detectron
# _C.TRAIN.LR_SCHEDULE = [1500000, 1580000, 1620000] # "9x" schedule in detectron
_C.TRAIN.EVAL_PERIOD = 25 # period (epochs) to run evaluation
# preprocessing --------------------
# Alternative old (worse & faster) setting: 600
_C.PREPROC.TRAIN_SHORT_EDGE_SIZE = [800, 800] # [min, max] to sample from
_C.PREPROC.TEST_SHORT_EDGE_SIZE = 800
_C.PREPROC.MAX_SIZE = 1333
# mean and std in RGB order.
# Un-scaled version: [0.485, 0.456, 0.406], [0.229, 0.224, 0.225]
_C.PREPROC.PIXEL_MEAN = [123.675, 116.28, 103.53]
_C.PREPROC.PIXEL_STD = [58.395, 57.12, 57.375]
# anchors -------------------------
_C.RPN.ANCHOR_STRIDE = 16
_C.RPN.ANCHOR_SIZES = (32, 64, 128, 256, 512) # sqrtarea of the anchor box
_C.RPN.ANCHOR_RATIOS = (0.5, 1., 2.)
_C.RPN.POSITIVE_ANCHOR_THRESH = 0.7
_C.RPN.NEGATIVE_ANCHOR_THRESH = 0.3
# rpn training -------------------------
_C.RPN.FG_RATIO = 0.5 # fg ratio among selected RPN anchors
_C.RPN.BATCH_PER_IM = 256 # total (across FPN levels) number of anchors that are marked valid
_C.RPN.MIN_SIZE = 0
_C.RPN.PROPOSAL_NMS_THRESH = 0.7
# Anchors which overlap with a crowd box (IOA larger than threshold) will be ignored.
# Setting this to a value larger than 1.0 will disable the feature.
# It is disabled by default because Detectron does not do this.
_C.RPN.CROWD_OVERLAP_THRESH = 9.99
_C.RPN.HEAD_DIM = 1024 # used in C4 only
# RPN proposal selection -------------------------------
# for C4
_C.RPN.TRAIN_PRE_NMS_TOPK = 12000
_C.RPN.TRAIN_POST_NMS_TOPK = 2000
_C.RPN.TEST_PRE_NMS_TOPK = 6000
_C.RPN.TEST_POST_NMS_TOPK = 1000 # if you encounter OOM in inference, set this to a smaller number
# for FPN, #proposals per-level and #proposals after merging are (for now) the same
# if FPN.PROPOSAL_MODE = 'Joint', these options have no effect
_C.RPN.TRAIN_PER_LEVEL_NMS_TOPK = 2000
_C.RPN.TEST_PER_LEVEL_NMS_TOPK = 1000
# fastrcnn training ---------------------
_C.FRCNN.BATCH_PER_IM = 512
_C.FRCNN.BBOX_REG_WEIGHTS = [10., 10., 5., 5.] # Slightly better setting: 20, 20, 10, 10
_C.FRCNN.FG_THRESH = 0.5
_C.FRCNN.FG_RATIO = 0.25 # fg ratio in a ROI batch
# FPN -------------------------
_C.FPN.ANCHOR_STRIDES = (4, 8, 16, 32, 64) # strides for each FPN level. Must be the same length as ANCHOR_SIZES
_C.FPN.PROPOSAL_MODE = 'Level' # 'Level', 'Joint'
_C.FPN.NUM_CHANNEL = 256
_C.FPN.NORM = 'None' # 'None', 'GN'
# The head option is only used in FPN. For C4 models, the head is C5
_C.FPN.FRCNN_HEAD_FUNC = 'fastrcnn_2fc_head'
# choices: fastrcnn_2fc_head, fastrcnn_4conv1fc_{,gn_}head
_C.FPN.FRCNN_CONV_HEAD_DIM = 256
_C.FPN.FRCNN_FC_HEAD_DIM = 1024
_C.FPN.MRCNN_HEAD_FUNC = 'maskrcnn_up4conv_head' # choices: maskrcnn_up4conv_{,gn_}head
# Mask-RCNN
_C.MRCNN.HEAD_DIM = 256
# Cascade-RCNN, only available in FPN mode
_C.FPN.CASCADE = False
_C.CASCADE.IOUS = [0.5, 0.6, 0.7]
_C.CASCADE.BBOX_REG_WEIGHTS = [[10., 10., 5., 5.], [20., 20., 10., 10.], [30., 30., 15., 15.]]
# testing -----------------------
_C.TEST.FRCNN_NMS_THRESH = 0.5
# Smaller threshold value gives significantly better mAP. But we use 0.05 for consistency with Detectron.
# mAP with 1e-4 threshold can be found at https://github.com/tensorpack/tensorpack/commit/26321ae58120af2568bdbf2269f32aa708d425a8#diff-61085c48abee915b584027e1085e1043 # noqa
_C.TEST.RESULT_SCORE_THRESH = 0.05
_C.TEST.RESULT_SCORE_THRESH_VIS = 0.5 # only visualize confident results
_C.TEST.RESULTS_PER_IM = 100
_C.freeze() # avoid typo / wrong config keys
def finalize_configs(is_training):
    """
    Run some sanity checks, and populate some configs from others
    """
    _C.freeze(False)  # populate new keys now
    if isinstance(_C.DATA.VAL, six.string_types):  # support single string (the typical case) as well
        _C.DATA.VAL = (_C.DATA.VAL, )

    assert _C.BACKBONE.NORM in ['FreezeBN', 'SyncBN', 'GN', 'None'], _C.BACKBONE.NORM
    if _C.BACKBONE.NORM != 'FreezeBN':
        assert not _C.BACKBONE.FREEZE_AFFINE
    assert _C.BACKBONE.FREEZE_AT in [0, 1, 2]

    _C.RPN.NUM_ANCHOR = len(_C.RPN.ANCHOR_SIZES) * len(_C.RPN.ANCHOR_RATIOS)
    assert len(_C.FPN.ANCHOR_STRIDES) == len(_C.RPN.ANCHOR_SIZES)
    # image size into the backbone has to be multiple of this number
    _C.FPN.RESOLUTION_REQUIREMENT = _C.FPN.ANCHOR_STRIDES[3]  # [3] because we build FPN with features r2,r3,r4,r5

    if _C.MODE_FPN:
        size_mult = _C.FPN.RESOLUTION_REQUIREMENT * 1.
        _C.PREPROC.MAX_SIZE = np.ceil(_C.PREPROC.MAX_SIZE / size_mult) * size_mult
        assert _C.FPN.PROPOSAL_MODE in ['Level', 'Joint']
        assert _C.FPN.FRCNN_HEAD_FUNC.endswith('_head')
        assert _C.FPN.MRCNN_HEAD_FUNC.endswith('_head')
        assert _C.FPN.NORM in ['None', 'GN']

        if _C.FPN.CASCADE:
            # the first threshold is the proposal sampling threshold
            assert _C.CASCADE.IOUS[0] == _C.FRCNN.FG_THRESH
            assert len(_C.CASCADE.BBOX_REG_WEIGHTS) == len(_C.CASCADE.IOUS)

    if is_training:
        train_scales = _C.PREPROC.TRAIN_SHORT_EDGE_SIZE
        if isinstance(train_scales, (list, tuple)) and train_scales[1] - train_scales[0] > 100:
            # don't autotune if augmentation is on
            os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
        os.environ['TF_AUTOTUNE_THRESHOLD'] = '1'
        assert _C.TRAINER in ['horovod', 'replicated'], _C.TRAINER

        # setup NUM_GPUS
        if _C.TRAINER == 'horovod':
            import horovod.tensorflow as hvd
            ngpu = hvd.size()
        else:
            assert 'OMPI_COMM_WORLD_SIZE' not in os.environ
            ngpu = get_num_gpu()
        assert ngpu > 0, "Has to train with GPU!"
        assert ngpu % 8 == 0 or 8 % ngpu == 0, "Can only train with 1,2,4 or >=8 GPUs, but found {} GPUs".format(ngpu)
    else:
        # autotune is too slow for inference
        os.environ['TF_CUDNN_USE_AUTOTUNE'] = '0'
        ngpu = get_num_gpu()

    if _C.TRAIN.NUM_GPUS is None:
        _C.TRAIN.NUM_GPUS = ngpu
    else:
        if _C.TRAINER == 'horovod':
            assert _C.TRAIN.NUM_GPUS == ngpu
        else:
            assert _C.TRAIN.NUM_GPUS <= ngpu

    _C.freeze()
    logger.info("Config: ------------------------------------------\n" + str(_C))
```
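The `--config` overrides passed on the command line (e.g. `MODE_MASK=True DATA.BASEDIR=...` in the log below) are consumed by `AttrDict.update_args` above. A minimal sketch of the same dotted-key parsing applied to a plain nested dict (`apply_overrides` is a made-up helper for illustration, not part of tensorpack):

```python
def apply_overrides(cfg, args):
    """Apply KEY.SUBKEY=value strings to a nested dict, like update_args."""
    for item in args:
        keys, v = item.split('=', maxsplit=1)
        *path, last = keys.split('.')
        node = cfg
        for k in path:
            node = node[k]
        # non-string values get eval'd, mirroring the config class
        node[last] = v if isinstance(node[last], str) else eval(v)
    return cfg


cfg = {'MODE_MASK': False, 'DATA': {'BASEDIR': ''}}
apply_overrides(cfg, ['MODE_MASK=True', 'DATA.BASEDIR=/tmp/coco'])
# cfg is now {'MODE_MASK': True, 'DATA': {'BASEDIR': '/tmp/coco'}}
```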
### 2. What you observed:
(1) **Include the ENTIRE logs here:**
```
[0720 20:32:58 @logger.py:90] Argv: train.py --config MODE_MASK=True MODE_FPN=True DATA.BASEDIR=/home/federicolondon2019/tensorpack/COCO/DIR BACKBONE.WEIGHTS=/home/federicolondon2019/tensorpack/models/ImageNet-R50-AlignPadding.npz
[0720 20:32:58 @train.py:55] Environment Information:
-------------------- -----------------------------------------------------------
sys.platform linux
Python 3.5.3 (default, Sep 27 2018, 17:25:39) [GCC 6.3.0 20170516]
Tensorpack v0.9.4-37-g59829770-dirty
Numpy 1.16.3
TensorFlow 1.13.1/b'v1.13.1-0-g6612da8951'
TF Compiler Version 4.8.5
TF CUDA support True
TF MKL support False
Nvidia Driver /usr/lib/x86_64-linux-gnu/libnvidia-ml.so.410.72
CUDA /usr/local/cuda-10.0/lib64/libcudart.so.10.0.130
CUDNN /usr/local/cuda-10.0/lib64/libcudnn.so.7.4.1
NCCL /usr/local/nccl2/lib/libnccl.so.2.3.4
CUDA_VISIBLE_DEVICES None
GPU 0,1 Tesla T4
Free RAM 57.83/58.99 GB
CPU Count 16
horovod 0.16.0
cv2 4.1.0
msgpack 0.6.1
python-prctl True
-------------------- -----------------------------------------------------------
[0720 20:32:58 @config.py:279] Config: ------------------------------------------
{'BACKBONE': {'FREEZE_AFFINE': False,
'FREEZE_AT': 2,
'NORM': 'FreezeBN',
'RESNET_NUM_BLOCKS': [3, 4, 6, 3],
'STRIDE_1X1': False,
'TF_PAD_MODE': False,
'WEIGHTS': '/home/federicolondon2019/tensorpack/models/ImageNet-R50-AlignPadding.npz'},
'CASCADE': {'BBOX_REG_WEIGHTS': [[10.0, 10.0, 5.0, 5.0], [20.0, 20.0, 10.0, 10.0],
[30.0, 30.0, 15.0, 15.0]],
'IOUS': [0.5, 0.6, 0.7]},
'DATA': {'ABSOLUTE_COORD': True,
'BASEDIR': '/home/federicolondon2019/tensorpack/COCO/DIR',
'CLASS_NAMES': ['Bird', 'Ground_Animal', 'Crosswalk_Plain', 'Person', 'Bicyclist',
'Motorcyclist', 'Other_Rider', 'Lane_Marking_-_Crosswalk', 'Banner',
'Bench', 'Bike_Rack', 'Billboard', 'Catch_Basin', 'CCTV_Camera',
'Fire_Hydrant', 'Junction_Box', 'Mailbox', 'Manhole', 'Phone_Booth',
'Street_Light', 'Pole', 'Traffic_Sign_Frame', 'Utility_Pole',
'Traffic_Light', 'Traffic_Sign_(Back)', 'Traffic_Sign_(Front)',
'Trash_Can', 'Bicycle', 'Boat', 'Bus', 'Car', 'Caravan', 'Motorcycle',
'Other_Vehicle', 'Trailer', 'Truck', 'Wheeled_Slow'],
'NUM_CATEGORY': 37,
'NUM_WORKERS': 10,
'TRAIN': 'train2017',
'VAL': ('val2017',)},
'FPN': {'ANCHOR_STRIDES': (4, 8, 16, 32, 64),
'CASCADE': False,
'FRCNN_CONV_HEAD_DIM': 256,
'FRCNN_FC_HEAD_DIM': 1024,
'FRCNN_HEAD_FUNC': 'fastrcnn_2fc_head',
'MRCNN_HEAD_FUNC': 'maskrcnn_up4conv_head',
'NORM': 'None',
'NUM_CHANNEL': 256,
'PROPOSAL_MODE': 'Level',
'RESOLUTION_REQUIREMENT': 32},
'FRCNN': {'BATCH_PER_IM': 512,
'BBOX_REG_WEIGHTS': [10.0, 10.0, 5.0, 5.0],
'FG_RATIO': 0.25,
'FG_THRESH': 0.5},
'MODE_FPN': True,
'MODE_MASK': True,
'MRCNN': {'HEAD_DIM': 256},
'PREPROC': {'MAX_SIZE': 1344.0,
'PIXEL_MEAN': [123.675, 116.28, 103.53],
'PIXEL_STD': [58.395, 57.12, 57.375],
'TEST_SHORT_EDGE_SIZE': 800,
'TRAIN_SHORT_EDGE_SIZE': [800, 800]},
'RPN': {'ANCHOR_RATIOS': (0.5, 1.0, 2.0),
'ANCHOR_SIZES': (32, 64, 128, 256, 512),
'ANCHOR_STRIDE': 16,
'BATCH_PER_IM': 256,
'CROWD_OVERLAP_THRESH': 9.99,
'FG_RATIO': 0.5,
'HEAD_DIM': 1024,
'MIN_SIZE': 0,
'NEGATIVE_ANCHOR_THRESH': 0.3,
'NUM_ANCHOR': 15,
'POSITIVE_ANCHOR_THRESH': 0.7,
'PROPOSAL_NMS_THRESH': 0.7,
'TEST_PER_LEVEL_NMS_TOPK': 1000,
'TEST_POST_NMS_TOPK': 1000,
'TEST_PRE_NMS_TOPK': 6000,
'TRAIN_PER_LEVEL_NMS_TOPK': 2000,
'TRAIN_POST_NMS_TOPK': 2000,
'TRAIN_PRE_NMS_TOPK': 12000},
'TEST': {'FRCNN_NMS_THRESH': 0.5,
'RESULTS_PER_IM': 100,
'RESULT_SCORE_THRESH': 0.05,
'RESULT_SCORE_THRESH_VIS': 0.5},
'TRAIN': {'BASE_LR': 0.01,
'EVAL_PERIOD': 25,
'LR_SCHEDULE': [120000, 160000, 180000],
'NUM_GPUS': 1,
'STARTING_EPOCH': 1,
'STEPS_PER_EPOCH': 500,
'WARMUP': 1000,
'WARMUP_INIT_LR': 0.0033000000000000004,
'WEIGHT_DECAY': 0.0001},
'TRAINER': 'horovod'}
[0720 20:32:58 @train.py:72] Warm Up Schedule (steps, value): [(0, 0.0033000000000000004), (1000, 0.01)]
[0720 20:32:58 @train.py:73] LR Schedule (epochs, value): [(2, 0.01), (1920.0, 0.001), (2560.0, 0.00010000000000000002)]
Traceback (most recent call last):
File "train.py", line 74, in <module>
train_dataflow = get_train_dataflow()
File "/home/federicolondon2019/tensorpack/examples/FasterRCNN/data.py", line 391, in get_train_dataflow
roidbs = list(itertools.chain.from_iterable(DatasetRegistry.get(x).training_roidbs() for x in cfg.DATA.TRAIN))
File "/home/federicolondon2019/tensorpack/examples/FasterRCNN/data.py", line 391, in <genexpr>
roidbs = list(itertools.chain.from_iterable(DatasetRegistry.get(x).training_roidbs() for x in cfg.DATA.TRAIN))
File "/home/federicolondon2019/tensorpack/examples/FasterRCNN/dataset/dataset.py", line 90, in get
assert name in DatasetRegistry._registry, "Dataset {} was not registered!".format(name)
AssertionError: Dataset t was not registered!
```
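One detail worth noting: the assertion complains about a dataset named `t`, and the changed config sets `_C.DATA.TRAIN = ('train2017')`. Parentheses alone do not make a tuple in Python, so `DATA.TRAIN` is a plain string, and iterating over it yields single characters, the first of which is `'t'`:

```python
train = ('train2017')   # just the string 'train2017', not a tuple
val = ('val2017', )     # the trailing comma makes a real one-element tuple

assert isinstance(train, str)
assert isinstance(val, tuple)

# A per-dataset loop over the string visits characters, not names:
assert [name for name in train][:2] == ['t', 'r']
assert [name for name in val] == ['val2017']
```

Writing the value with a trailing comma, as the config already does for `DATA.VAL`, keeps it iterable by dataset name.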
| closed | 2019-07-20T21:07:26Z | 2019-07-26T02:50:33Z | https://github.com/tensorpack/tensorpack/issues/1276 | [
"examples"
] | AlbertoMCS | 9 |
igorbenav/fastcrud | sqlalchemy | 81 | Deprecation warning missing from Depends handling | closed | 2024-05-10T05:07:32Z | 2024-05-10T05:16:16Z | https://github.com/igorbenav/fastcrud/issues/81 | [
"enhancement",
"Automatic Endpoint"
] | igorbenav | 0 | |
pyg-team/pytorch_geometric | pytorch | 9,600 | bunch of CI failures with latest updates | ### 🐛 Describe the bug
When updating from commit 8c849a482c3cf2326c1f493e79d04169b26dfb0b to the latest commit c0c2d5fefddbce412741db68cc7a74af225fa94a, we now see the following errors. They're all pretty much the same; let me know if you want the full log.
______________________________ test_to_undirected ______________________________
def test_to_undirected():
row = torch.tensor([0, 1, 1])
col = torch.tensor([1, 0, 2])
edge_index = to_undirected(torch.stack([row, col], dim=0))
assert edge_index.tolist() == [[0, 1, 1, 2], [1, 0, 2, 1]]
@torch.jit.script
> def jit(edge_index: Tensor) -> Tensor:
test/utils/test_undirected.py:37:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script
ret = _script_impl(
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: in _script_impl
fn = torch._C._jit_script_compile(
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1498: in _get_overloads
_compile_function_with_overload(overload_fn, qual_name, obj)
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1471: in _compile_function_with_overload
fn = torch._C._jit_script_compile_overload(
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1498: in _get_overloads
_compile_function_with_overload(overload_fn, qual_name, obj)
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1471: in _compile_function_with_overload
fn = torch._C._jit_script_compile_overload(
/usr/local/lib/python3.10/dist-packages/torch/jit/_recursive.py:1003: in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script
ret = _script_impl(
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: in _script_impl
fn = torch._C._jit_script_compile(
/usr/local/lib/python3.10/dist-packages/torch/jit/_recursive.py:1003: in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script
ret = _script_impl(
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: in _script_impl
fn = torch._C._jit_script_compile(
/usr/local/lib/python3.10/dist-packages/torch/jit/_recursive.py:1003: in try_compile_fn
return torch.jit.script(fn, _rcb=rcb)
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1428: in script
ret = _script_impl(
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
obj = <function is_compiling at 0xf103a8e791b0>, optimize = None, _frames_up = 1
_rcb = <function createResolutionCallbackFromEnv.<locals>.<lambda> at 0xf10712e6fc70>
example_inputs = None
def _script_impl(
obj,
optimize=None,
_frames_up=0,
_rcb=None,
example_inputs: Union[List[Tuple], Dict[Callable, List[Tuple]], None] = None,
):
global type_trace_db
if optimize is not None:
warnings.warn(
"`optimize` is deprecated and has no effect. "
"Use `with torch.jit.optimized_execution()` instead",
FutureWarning,
stacklevel=3,
)
# No-op for modules, functions, class instances that are already scripted
if isinstance(obj, RecursiveScriptClass):
return obj
if isinstance(obj, ScriptModule):
return obj
if isinstance(obj, ScriptFunction):
return obj
if example_inputs:
# If MonkeyType is installed, enable profile directed type annotation
# Check if example_inputs are defined and generate call traces
# for the method by running eager mode version of the method with
# the provide example inputs. This logs all the traces in type_trace_db
type_trace_db = JitTypeTraceStore()
if monkeytype_trace:
monkeytype_config = JitTypeTraceConfig(type_trace_db)
with monkeytype_trace(monkeytype_config):
if isinstance(example_inputs, Dict):
# If the obj is an nn.Module or a class, then each method is
# executed with the arguments provided in the example inputs.
# example inputs here will be of type Dict(class.method, (arguments))
# This is used to infer type annotations for those methods
# which are not called directly under the hood of monkeytype.
for module, example_input in example_inputs.items():
for example in example_input:
module(*example)
elif isinstance(example_inputs, List):
for examples in example_inputs:
obj(*examples)
else:
raise ValueError(
"Error: Unable to infer types. Please format the inputs to type `List[Tuple]`"
" or `Dict[Callable, List[Tuple]]` to be run with MonkeyType."
)
else:
warnings.warn(
"Warning: monkeytype is not installed. Please install https://github.com/Instagram/MonkeyType "
"to enable Profile-Directed Typing in TorchScript. Refer to "
"https://github.com/Instagram/MonkeyType/blob/master/README.rst to install MonkeyType. "
)
if isinstance(obj, torch.nn.Module):
obj = call_prepare_scriptable_func(obj)
return torch.jit._recursive.create_script_module(
obj, torch.jit._recursive.infer_methods_to_compile
)
else:
obj = obj.__prepare_scriptable__() if hasattr(obj, "__prepare_scriptable__") else obj # type: ignore[operator]
if isinstance(obj, dict):
return create_script_dict(obj)
if isinstance(obj, list):
return create_script_list(obj)
if inspect.isclass(obj):
qualified_name = _qualified_name(obj)
# If this type is a `nn.Module` subclass, they probably meant to pass
# an instance instead of a Module
if issubclass(obj, torch.nn.Module):
raise RuntimeError(
f"Type '{obj}' cannot be compiled since it inherits from nn.Module, pass an instance instead"
)
# Enums are automatically usable in TorchScript, explicitly scripting
# is not necessary, but not harmful either.
if issubclass(obj, enum.Enum):
return obj
if not _is_new_style_class(obj):
raise RuntimeError(
"TorchScript classes must be new-style classes. "
"Please inherit from 'object'."
)
if len(obj.mro()) > 2:
raise RuntimeError(
"TorchScript classes does not support inheritance yet. "
"Please directly inherit from 'object'."
)
if _rcb is None:
_rcb = _jit_internal.createResolutionCallbackFromFrame(_frames_up + 1)
_compile_and_register_class(obj, _rcb, qualified_name)
return obj
elif inspect.isfunction(obj) or inspect.ismethod(obj):
qualified_name = _qualified_name(obj)
# this is a decorated fn, and we need to the underlying fn and its rcb
if hasattr(obj, "__script_if_tracing_wrapper"):
obj = obj.__original_fn # type: ignore[union-attr]
_rcb = _jit_internal.createResolutionCallbackFromClosure(obj)
# some functions are explicitly marked as not supported in script mode
if hasattr(obj, "__script_unsupported"):
raise RuntimeError("TorchScript error: " + obj.__script_unsupported)
_check_directly_compile_overloaded(obj)
maybe_already_compiled_fn = _try_get_jit_cached_function(obj)
if maybe_already_compiled_fn:
maybe_already_compiled_fn._torchdynamo_inline = obj # type: ignore[attr-defined]
return maybe_already_compiled_fn
ast = get_jit_def(obj, obj.__name__)
if _rcb is None:
_rcb = _jit_internal.createResolutionCallbackFromClosure(obj)
> fn = torch._C._jit_script_compile(
qualified_name, ast, _rcb, get_default_args(obj)
)
E RuntimeError:
E undefined value torch:
E File "/usr/local/lib/python3.10/dist-packages/typing_extensions.py", line 34
E It will depend on the context where to use what.
E """
E return torch.compiler.is_compiling()
E ~~~~~ <--- HERE
E 'is_compiling' is being compiled since it was called from 'is_compiling'
E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/_compile.py", line 14
E """
E if torch_geometric.typing.WITH_PT21:
E return torch._dynamo.is_compiling()
E ~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E return False # pragma: no cover
E 'is_compiling' is being compiled since it was called from 'index_sort'
E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/_index_sort.py", line 30
E (default: :obj:`False`)
E """
E if stable or not torch_geometric.typing.WITH_INDEX_SORT or is_compiling():
E ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E return inputs.sort(stable=stable)
E return pyg_lib.ops.index_sort(inputs, max_value=max_value)
E 'index_sort' is being compiled since it was called from 'coalesce'
E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/_coalesce.py", line 147
E
E if not is_sorted:
E idx[1:], perm = index_sort(idx[1:], max_value=num_nodes * num_nodes)
E ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E if isinstance(edge_index, Tensor):
E edge_index = edge_index[:, perm]
E 'coalesce' is being compiled since it was called from 'to_undirected'
E File "/usr/local/lib/python3.10/dist-packages/torch_geometric/utils/undirected.py", line 209
E edge_attr = [torch.cat([e, e], dim=0) for e in edge_attr]
E
E return coalesce(edge_index, edge_attr, num_nodes, reduce)
E ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
E 'to_undirected' is being compiled since it was called from 'jit'
E File "/opt/pyg/pytorch_geometric/test/utils/test_undirected.py", line 38
E @torch.jit.script
E def jit(edge_index: Tensor) -> Tensor:
E return to_undirected(edge_index)
E ~~~~~~~~~~~~~~~~~~~~~~~~ <--- HERE
/usr/local/lib/python3.10/dist-packages/torch/jit/_script.py:1204: RuntimeError
=============================== warnings summary ===============================
../../../usr/local/lib/python3.10/dist-packages/torch_geometric/_compile.py:14: 2 warnings
test/contrib/nn/models/test_rbcd_attack.py: 36 warnings
test/data/test_batch.py: 3 warnings
test/data/test_data.py: 2 warnings
test/data/test_datapipes.py: 1 warning
test/data/test_dataset_summary.py: 5 warnings
test/data/test_graph_store.py: 1 warning
test/data/test_hypergraph_data.py: 1 warning
test/datasets/graph_generator/test_ba_graph.py: 1 warning
test/datasets/graph_generator/test_er_graph.py: 1 warning
test/datasets/graph_generator/test_grid_graph.py: 1 warning
test/datasets/graph_generator/test_tree_graph.py: 1 warning
test/datasets/test_ba_shapes.py: 1 warning
test/datasets/test_bzr.py: 1 warning
test/datasets/test_enzymes.py: 2 warnings
test/datasets/test_explainer_dataset.py: 3 warnings
test/datasets/test_fake.py: 36 warnings
test/datasets/test_imdb_binary.py: 1 warning
test/datasets/test_infection_dataset.py: 2 warnings
test/datasets/test_mutag.py: 1 warning
test/datasets/test_planetoid.py: 1 warning
test/datasets/test_snap_dataset.py: 12 warnings
test/distributed/test_local_graph_store.py: 1 warning
test/explain/algorithm/test_attention_explainer.py: 4 warnings
test/explain/algorithm/test_captum.py: 13 warnings
test/explain/algorithm/test_gnn_explainer.py: 866 warnings
test/explain/algorithm/test_graphmask_explainer.py: 648 warnings
test/explain/algorithm/test_pg_explainer.py: 12 warnings
test/loader/test_cache.py: 4 warnings
test/loader/test_imbalanced_sampler.py: 3 warnings
test/loader/test_link_neighbor_loader.py: 41 warnings
test/loader/test_neighbor_loader.py: 44 warnings
test/loader/test_zip_loader.py: 2 warnings
test/nn/aggr/test_attention.py: 2 warnings
test/nn/aggr/test_basic.py: 5 warnings
test/nn/aggr/test_fused.py: 7 warnings
test/nn/aggr/test_multi.py: 10 warnings
test/nn/aggr/test_scaler.py: 2 warnings
test/nn/aggr/test_set2set.py: 1 warning
test/nn/conv/cugraph/test_cugraph_gat_conv.py: 48 warnings
test/nn/conv/cugraph/test_cugraph_rgcn_conv.py: 144 warnings
test/nn/conv/cugraph/test_cugraph_sage_conv.py: 128 warnings
test/nn/conv/test_agnn_conv.py: 2 warnings
test/nn/conv/test_antisymmetric_conv.py: 1 warning
test/nn/conv/test_appnp.py: 2 warnings
test/nn/conv/test_arma_conv.py: 2 warnings
test/nn/conv/test_cg_conv.py: 3 warnings
test/nn/conv/test_cheb_conv.py: 2 warnings
test/nn/conv/test_cluster_gcn_conv.py: 1 warning
test/nn/conv/test_create_gnn.py: 1 warning
test/nn/conv/test_dir_gnn_conv.py: 2 warnings
test/nn/conv/test_dna_conv.py: 2 warnings
test/nn/conv/test_edge_conv.py: 1 warning
test/nn/conv/test_eg_conv.py: 5 warnings
test/nn/conv/test_fa_conv.py: 1 warning
test/nn/conv/test_feast_conv.py: 1 warning
test/nn/conv/test_film_conv.py: 1 warning
test/nn/conv/test_fused_gat_conv.py: 1 warning
test/nn/conv/test_gat_conv.py: 5 warnings
test/nn/conv/test_gated_graph_conv.py: 1 warning
test/nn/conv/test_gatv2_conv.py: 3 warnings
test/nn/conv/test_gcn2_conv.py: 1 warning
test/nn/conv/test_gcn_conv.py: 9 warnings
test/nn/conv/test_gen_conv.py: 3 warnings
test/nn/conv/test_general_conv.py: 8 warnings
test/nn/conv/test_gin_conv.py: 5 warnings
test/nn/conv/test_gmm_conv.py: 4 warnings
test/nn/conv/test_gps_conv.py: 6 warnings
test/nn/conv/test_graph_conv.py: 2 warnings
test/nn/conv/test_han_conv.py: 3 warnings
test/nn/conv/test_heat_conv.py: 2 warnings
test/nn/conv/test_hetero_conv.py: 11 warnings
test/nn/conv/test_hgt_conv.py: 7 warnings
test/nn/conv/test_hypergraph_conv.py: 2 warnings
test/nn/conv/test_le_conv.py: 1 warning
test/nn/conv/test_lg_conv.py: 1 warning
test/nn/conv/test_message_passing.py: 36 warnings
test/nn/conv/test_mf_conv.py: 1 warning
test/nn/conv/test_mixhop_conv.py: 1 warning
test/nn/conv/test_nn_conv.py: 2 warnings
test/nn/conv/test_pdn_conv.py: 2 warnings
test/nn/conv/test_pna_conv.py: 3 warnings
test/nn/conv/test_point_conv.py: 1 warning
test/nn/conv/test_point_gnn_conv.py: 1 warning
test/nn/conv/test_point_transformer_conv.py: 1 warning
test/nn/conv/test_ppf_conv.py: 1 warning
test/nn/conv/test_res_gated_graph_conv.py: 2 warnings
test/nn/conv/test_rgat_conv.py: 65 warnings
test/nn/conv/test_rgcn_conv.py: 18 warnings
test/nn/conv/test_sage_conv.py: 22 warnings
test/nn/conv/test_sg_conv.py: 1 warning
test/nn/conv/test_signed_conv.py: 1 warning
test/nn/conv/test_simple_conv.py: 4 warnings
test/nn/conv/test_ssg_conv.py: 1 warning
test/nn/conv/test_static_graph.py: 1 warning
test/nn/conv/test_supergat_conv.py: 2 warnings
test/nn/conv/test_tag_conv.py: 2 warnings
test/nn/conv/test_transformer_conv.py: 4 warnings
test/nn/conv/test_wl_conv.py: 1 warning
test/nn/conv/test_wl_conv_continuous.py: 1 warning
test/nn/dense/test_dense_gat_conv.py: 4 warnings
test/nn/dense/test_dense_gcn_conv.py: 1 warning
test/nn/dense/test_dense_gin_conv.py: 1 warning
test/nn/dense/test_dense_graph_conv.py: 6 warnings
test/nn/dense/test_dense_sage_conv.py: 1 warning
test/nn/dense/test_linear.py: 14 warnings
test/nn/models/test_attentive_fp.py: 1 warning
test/nn/models/test_basic_gnn.py: 1821 warnings
test/nn/models/test_correct_and_smooth.py: 1 warning
test/nn/models/test_deep_graph_infomax.py: 2 warnings
test/nn/models/test_deepgcn.py: 8 warnings
test/nn/models/test_graph_unet.py: 1 warning
test/nn/models/test_label_prop.py: 1 warning
test/nn/models/test_lightgcn.py: 36 warnings
test/nn/models/test_linkx.py: 2 warnings
test/nn/models/test_metapath2vec.py: 3 warnings
test/nn/models/test_neural_fingerprint.py: 2 warnings
test/nn/models/test_node2vec.py: 2 warnings
test/nn/models/test_pmlp.py: 1 warning
test/nn/models/test_rect.py: 1 warning
test/nn/models/test_rev_gnn.py: 20 warnings
test/nn/models/test_signed_gcn.py: 2 warnings
test/nn/models/test_tgn.py: 2 warnings
test/nn/pool/select/test_select_topk.py: 1 warning
test/nn/pool/test_asap.py: 1 warning
test/nn/pool/test_avg_pool.py: 1 warning
test/nn/pool/test_edge_pool.py: 2 warnings
test/nn/pool/test_glob.py: 2 warnings
test/nn/pool/test_max_pool.py: 3 warnings
test/nn/pool/test_sag_pool.py: 1 warning
test/nn/pool/test_topk_pool.py: 1 warning
test/nn/test_compile_basic.py: 2 warnings
test/nn/test_compile_conv.py: 4 warnings
test/nn/test_model_summary.py: 5 warnings
test/nn/test_sequential.py: 4 warnings
test/nn/test_to_hetero_module.py: 3 warnings
test/nn/test_to_hetero_transformer.py: 10 warnings
test/nn/test_to_hetero_with_bases_transformer.py: 5 warnings
test/profile/test_profile.py: 7 warnings
test/profile/test_profiler.py: 2 warnings
test/sampler/test_sampler_base.py: 2 warnings
test/test_edge_index.py: 208 warnings
test/test_warnings.py: 1 warning
test/transforms/test_add_metapaths.py: 4 warnings
test/transforms/test_face_to_edge.py: 1 warning
test/transforms/test_feature_propagation.py: 1 warning
test/transforms/test_gdc.py: 2 warnings
test/transforms/test_line_graph.py: 1 warning
test/transforms/test_local_cartesian.py: 1 warning
test/transforms/test_local_degree_profile.py: 1 warning
test/transforms/test_node_property_split.py: 3 warnings
test/transforms/test_pad.py: 34 warnings
test/transforms/test_random_link_split.py: 3 warnings
test/transforms/test_remove_duplicated_edges.py: 1 warning
test/transforms/test_rooted_subgraph.py: 2 warnings
test/transforms/test_sign.py: 1 warning
test/transforms/test_to_sparse_tensor.py: 8 warnings
test/transforms/test_to_undirected.py: 3 warnings
test/transforms/test_two_hop.py: 1 warning
test/utils/test_assortativity.py: 1 warning
test/utils/test_augmentation.py: 1 warning
test/utils/test_coalesce.py: 2 warnings
test/utils/test_convert.py: 18 warnings
test/utils/test_embedding.py: 1 warning
test/utils/test_grid.py: 1 warning
test/utils/test_loop.py: 3 warnings
test/utils/test_mesh_laplacian.py: 2 warnings
test/utils/test_negative_sampling.py: 3 warnings
test/utils/test_num_nodes.py: 1 warning
test/utils/test_ppr.py: 2 warnings
test/utils/test_random.py: 3 warnings
test/utils/test_scatter.py: 6 warnings
test/utils/test_softmax.py: 3 warnings
test/utils/test_sort_edge_index.py: 1 warning
test/utils/test_sparse.py: 22 warnings
test/utils/test_spmm.py: 2 warnings
test/utils/test_train_test_split_edges.py: 1 warning
test/utils/test_tree_decomposition.py: 2 warnings
test/utils/test_trim_to_layer.py: 1 warning
test/utils/test_undirected.py: 2 warnings
test/visualization/test_influence.py: 1 warning
/usr/local/lib/python3.10/dist-packages/torch_geometric/_compile.py:14: FutureWarning: `torch._dynamo.external_utils.is_compiling` is deprecated. Use `torch.compiler.is_compiling` instead.
return torch._dynamo.is_compiling()
../../../usr/local/lib/python3.10/dist-packages/torch_geometric/graphgym/imports.py:14
/usr/local/lib/python3.10/dist-packages/torch_geometric/graphgym/imports.py:14: UserWarning: Please install 'pytorch_lightning' via 'pip install pytorch_lightning' in order to use GraphGym
warnings.warn("Please install 'pytorch_lightning' via "
test/data/test_batch.py::test_pickling
/opt/pyg/pytorch_geometric/test/data/test_batch.py:333: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
batch = torch.load(path)
test/data/test_dataset.py: 4 warnings
test/datasets/test_bzr.py: 2 warnings
test/datasets/test_elliptic.py: 1 warning
test/datasets/test_enzymes.py: 3 warnings
test/datasets/test_imdb_binary.py: 1 warning
test/datasets/test_mutag.py: 2 warnings
test/datasets/test_planetoid.py: 3 warnings
test/datasets/test_snap_dataset.py: 3 warnings
test/datasets/test_suite_sparse.py: 2 warnings
test/io/test_fs.py: 2 warnings
test/nn/models/test_re_net.py: 1 warning
test/transforms/test_random_link_split.py: 1 warning
/usr/local/lib/python3.10/dist-packages/torch_geometric/io/fs.py:215: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
return torch.load(f, map_location)
test/loader/test_prefetch.py: 10 warnings
/usr/local/lib/python3.10/dist-packages/torch_geometric/loader/prefetch.py:76: DeprecationWarning: The argument 'device' of Tensor.pin_memory() is deprecated. Please do not pass this argument. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/Memory.cpp:46.)
batch = batch.pin_memory(self.device_helper.device)
test/loader/test_prefetch.py: 10 warnings
/usr/local/lib/python3.10/dist-packages/torch_geometric/loader/prefetch.py:76: DeprecationWarning: The argument 'device' of Tensor.is_pinned() is deprecated. Please do not pass this argument. (Triggered internally at /opt/pytorch/pytorch/aten/src/ATen/native/Memory.cpp:31.)
batch = batch.pin_memory(self.device_helper.device)
test/nn/conv/cugraph/test_cugraph_gat_conv.py: 24 warnings
test/nn/conv/cugraph/test_cugraph_rgcn_conv.py: 72 warnings
test/nn/conv/cugraph/test_cugraph_sage_conv.py: 64 warnings
/usr/local/lib/python3.10/dist-packages/pylibcugraphops/pytorch/graph.py:71: UserWarning: dst_max_in_degree currently has no effect
warnings.warn("dst_max_in_degree currently has no effect")
test/nn/conv/test_message_passing.py::test_my_conv_save
/opt/pyg/pytorch_geometric/test/nn/conv/test_message_passing.py:142: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
conv = torch.load(path)
test/nn/conv/test_message_passing.py::test_pickle
/opt/pyg/pytorch_geometric/test/nn/conv/test_message_passing.py:741: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model = torch.load(path)
test/nn/conv/test_rgcn_conv.py: 12 warnings
/usr/local/lib/python3.10/dist-packages/torch/jit/_check.py:178: UserWarning: The TorchScript type system doesn't support instance-level annotations on empty non-base types in `__init__`. Instead, either 1) use a type annotation in the class body, or 2) wrap the type in `torch.jit.Attribute`.
warnings.warn(
test/nn/models/test_basic_gnn.py::test_packaging
/opt/pyg/pytorch_geometric/test/nn/models/test_basic_gnn.py:238: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
model = torch.load(path)
test/nn/nlp/test_sentence_transformer.py: 12 warnings
/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
warnings.warn(
test/nn/nlp/test_sentence_transformer.py: 12 warnings
/usr/local/lib/python3.10/dist-packages/transformers/modeling_attn_mask_utils.py:445: FutureWarning: `torch._dynamo.external_utils.is_compiling` is deprecated. Use `torch.compiler.is_compiling` instead.
or (hasattr(torch, "_dynamo") and torch._dynamo.is_compiling())
test/nn/test_model_hub.py::test_from_pretrained
/usr/local/lib/python3.10/dist-packages/torch_geometric/nn/model_hub.py:178: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
state_dict = torch.load(model_file, map_location=map_location)
test/profile/test_profiler.py::test_profiler[cpu]
test/profile/test_profiler.py::test_profiler[cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:342: FutureWarning: `self_cuda_memory_usage` is deprecated. Use `self_device_memory_usage` instead.
hasattr(e, "self_cuda_memory_usage") for e in events)
test/profile/test_profiler.py::test_profiler[cpu]
test/profile/test_profiler.py::test_profiler[cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:345: FutureWarning: `self_cuda_memory_usage` is deprecated. Use `self_device_memory_usage` instead.
[getattr(e, "self_cuda_memory_usage", 0) or 0 for e in events])
test/profile/test_profiler.py::test_profiler[cpu]
test/profile/test_profiler.py::test_profiler[cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:355: FutureWarning: `self_cuda_time_total` is deprecated. Use `self_device_time_total` instead.
hasattr(e, "self_cuda_time_total") for e in events)
test/profile/test_profiler.py::test_profiler[cpu]
test/profile/test_profiler.py::test_profiler[cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:358: FutureWarning: `self_cuda_time_total` is deprecated. Use `self_device_time_total` instead.
[getattr(e, "self_cuda_time_total", 0) or 0 for e in events])
test/profile/test_profiler.py::test_profiler[cpu]
test/profile/test_profiler.py::test_profiler[cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/profile/profiler.py:364: FutureWarning: `cuda_time_total` is deprecated. Use `device_time_total` instead.
cuda_total=sum([e.cuda_time_total or 0 for e in events]),
test/test_edge_index.py::test_save_and_load[int64-cpu]
test/test_edge_index.py::test_save_and_load[int64-cuda:0]
test/test_edge_index.py::test_save_and_load[int32-cpu]
test/test_edge_index.py::test_save_and_load[int32-cuda:0]
/opt/pyg/pytorch_geometric/test/test_edge_index.py:1259: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
out = torch.load(path)
test/test_index.py::test_save_and_load[int64-cpu]
test/test_index.py::test_save_and_load[int64-cuda:0]
test/test_index.py::test_save_and_load[int32-cpu]
test/test_index.py::test_save_and_load[int32-cuda:0]
/opt/pyg/pytorch_geometric/test/test_index.py:532: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
out = torch.load(path)
test/utils/test_convert.py: 16 warnings
/usr/local/lib/python3.10/dist-packages/cugraph/structure/symmetrize.py:92: FutureWarning: Multi is deprecated and the removal of multi edges will no longer be supported from 'symmetrize'. Multi edges will be removed upon creation of graph instance.
warnings.warn(
test/utils/test_scatter.py::test_scatter_backward[min-cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/warnings.py:11: UserWarning: The usage of `scatter(reduce='min')` can be accelerated via the 'torch-scatter' package, but it was not found
warnings.warn(message)
test/utils/test_scatter.py::test_scatter_backward[max-cuda:0]
/usr/local/lib/python3.10/dist-packages/torch_geometric/warnings.py:11: UserWarning: The usage of `scatter(reduce='max')` can be accelerated via the 'torch-scatter' package, but it was not found
warnings.warn(message)
test/utils/test_sparse.py::test_to_torch_coo_tensor_save_load
/opt/pyg/pytorch_geometric/test/utils/test_sparse.py:227: FutureWarning: You are using `torch.load` with `weights_only=False` (the current default value), which uses the default pickle module implicitly. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling (See https://github.com/pytorch/pytorch/blob/main/SECURITY.md#untrusted-models for more details). In a future release, the default value for `weights_only` will be flipped to `True`. This limits the functions that could be executed during unpickling. Arbitrary objects will no longer be allowed to be loaded via this mode unless they are explicitly allowlisted by the user via `torch.serialization.add_safe_globals`. We recommend you start setting `weights_only=True` for any use case where you don't have full control of the loaded file. Please open an issue on GitHub for any issues related to this experimental feature.
adj = torch.load(path)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
---------- coverage: platform linux, python 3.10.12-final-0 ----------
Coverage XML written to file coverage.xml
=========================== short test summary info ============================
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs0] - RuntimeError:
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs3] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs4] - RuntimeError:
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs5] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_fused.py::test_fused_aggregation[aggrs6] - RuntimeError:
FAILED test/nn/aggr/test_gmt.py::test_graph_multiset_transformer - RuntimeError:
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple0] - RuntimeError:
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple3] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple4] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple5] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple6] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple7] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple8] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_multi.py::test_multi_aggr[multi_aggr_tuple9] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_scaler.py::test_degree_scaler_aggregation[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_scaler.py::test_degree_scaler_aggregation[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/aggr/test_set_transformer.py::test_set_transformer_aggregation - RuntimeError:
FAILED test/nn/conv/test_agnn_conv.py::test_agnn_conv[True] - RuntimeError:
FAILED test/nn/conv/test_agnn_conv.py::test_agnn_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_appnp.py::test_appnp - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_arma_conv.py::test_arma_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_arma_conv.py::test_lazy_arma_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_cg_conv.py::test_cg_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_cg_conv.py::test_cg_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_cg_conv.py::test_cg_conv_with_edge_features - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_cheb_conv.py::test_cheb_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_cluster_gcn_conv.py::test_cluster_gcn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_dna_conv.py::test_dna_conv[3-32] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_dna_conv.py::test_dna_conv_sparse_tensor[3-32] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_edge_conv.py::test_edge_conv_conv - RuntimeError:
FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[True-aggregators0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[True-aggregators1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[False-aggregators0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_eg_conv.py::test_eg_conv[False-aggregators1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_fa_conv.py::test_fa_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_feast_conv.py::test_feast_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_film_conv.py::test_film_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gat_conv.py::test_gat_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gat_conv.py::test_gat_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gated_graph_conv.py::test_gated_graph_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gatv2_conv.py::test_gatv2_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gatv2_conv.py::test_gatv2_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gcn2_conv.py::test_gcn2_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gcn_conv.py::test_gcn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gcn_conv.py::test_gcn_conv_with_decomposed_layers - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gen_conv.py::test_gen_conv[softmax] - RuntimeError:
FAILED test/nn/conv/test_gen_conv.py::test_gen_conv[powermean] - RuntimeError:
FAILED test/nn/conv/test_gen_conv.py::test_gen_conv[aggr2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gin_conv.py::test_gin_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gin_conv.py::test_gine_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gmm_conv.py::test_gmm_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_gmm_conv.py::test_gmm_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_graph_conv.py::test_graph_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_heat_conv.py::test_heat_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_heat_conv.py::test_heat_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_le_conv.py::test_le_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_lg_conv.py::test_lg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_commented_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_kwargs_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_conv_jit_save - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_multiple_aggr_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_edge_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_my_default_arg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_tuple_output_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_explain_message - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[4] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[8] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_traceable_my_conv_with_self_loops[0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_message_passing.py::test_pickle - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_mf_conv.py::test_mf_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_mixhop_conv.py::test_mixhop_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_nn_conv.py::test_nn_conv[cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_nn_conv.py::test_nn_conv[cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_pdn_conv.py::test_pdn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_pdn_conv.py::test_pdn_conv_with_sparse_node_input_feature - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_pna_conv.py::test_pna_conv[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_pna_conv.py::test_pna_conv[False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_point_conv.py::test_point_net_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_point_gnn_conv.py::test_point_gnn_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_point_transformer_conv.py::test_point_transformer_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_ppf_conv.py::test_ppf_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_res_gated_graph_conv.py::test_res_gated_graph_conv[None] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_res_gated_graph_conv.py::test_res_gated_graph_conv[4] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgat_conv.py::test_rgat_conv_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-RGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-RGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-FastRGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf0-FastRGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-RGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-RGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-FastRGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf1-FastRGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-RGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-RGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-FastRGCNConv-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_rgcn_conv.py::test_rgcn_conv_basic[conf2-FastRGCNConv-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[mean-False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[mean-True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[sum-False] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_sage_conv.py::test_sage_conv[sum-True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_sg_conv.py::test_sg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_signed_conv.py::test_signed_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[mean-None] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[sum-sum] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[aggr2-cat] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_simple_conv.py::test_simple_conv[mean-self_loop] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_ssg_conv.py::test_ssg_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_tag_conv.py::test_tag_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/conv/test_wl_conv_continuous.py::test_wl_conv - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/dense/test_linear.py::test_hetero_linear_basic[cpu] - RuntimeError:
FAILED test/nn/dense/test_linear.py::test_hetero_linear_basic[cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/dense/test_linear.py::test_hetero_dict_linear_jit - RuntimeError:
FAILED test/nn/models/test_attentive_fp.py::test_attentive_fp - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/models/test_basic_gnn.py::test_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/models/test_linkx.py::test_linkx[1] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/models/test_linkx.py::test_linkx[2] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/models/test_meta.py::test_meta_layer_example - RuntimeError:
FAILED test/nn/models/test_rect.py::test_rect - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_graph_norm.py::test_graph_norm - RuntimeError:
FAILED test/nn/norm/test_instance_norm.py::test_instance_norm[True] - RuntimeError:
FAILED test/nn/norm/test_instance_norm.py::test_instance_norm[False] - RuntimeError:
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-True-cpu] - RuntimeError:
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-True-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-False-cpu] - RuntimeError:
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[graph-False-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-True-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-True-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-False-cpu] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_layer_norm.py::test_layer_norm[node-False-cuda:0] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/norm/test_mean_subtraction_norm.py::test_mean_subtraction_norm - RuntimeError:
FAILED test/nn/norm/test_pair_norm.py::test_pair_norm[False] - RuntimeError:
FAILED test/nn/norm/test_pair_norm.py::test_pair_norm[True] - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/pool/select/test_select_topk.py::test_topk_ratio - RuntimeError:
FAILED test/nn/pool/test_asap.py::test_asap - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/pool/test_asap.py::test_asap_jit_save - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/pool/test_avg_pool.py::test_avg_pool_x - RuntimeError:
FAILED test/nn/pool/test_edge_pool.py::test_compute_edge_score_softmax - RuntimeError:
FAILED test/nn/pool/test_edge_pool.py::test_edge_pooling - RuntimeError:
FAILED test/nn/pool/test_max_pool.py::test_max_pool_x - RuntimeError:
FAILED test/nn/pool/test_sag_pool.py::test_sag_pooling - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/nn/pool/test_topk_pool.py::test_topk_pooling - RuntimeError:
FAILED test/nn/test_sequential.py::test_sequential_jit - RuntimeError: Can't redefine method: forward on class: __torch__.torch_geom...
FAILED test/test_edge_index.py::test_torch_script - AssertionError: Regex pattern did not match.
FAILED test/utils/test_coalesce.py::test_coalesce_jit - RuntimeError:
FAILED test/utils/test_grid.py::test_grid - RuntimeError:
FAILED test/utils/test_isolated.py::test_contains_isolated_nodes - RuntimeError:
FAILED test/utils/test_laplacian.py::test_get_laplacian - RuntimeError:
FAILED test/utils/test_softmax.py::test_softmax - RuntimeError:
FAILED test/utils/test_sort_edge_index.py::test_sort_edge_index_jit - RuntimeError:
FAILED test/utils/test_sparse.py::test_to_torch_coo_tensor - RuntimeError:
FAILED test/utils/test_spmm.py::test_spmm_jit[sum] - RuntimeError:
FAILED test/utils/test_spmm.py::test_spmm_jit[mean] - RuntimeError:
FAILED test/utils/test_to_dense_adj.py::test_to_dense_adj - RuntimeError:
FAILED test/utils/test_to_dense_batch.py::test_to_dense_batch_jit - RuntimeError:
FAILED test/utils/test_undirected.py::test_is_undirected - RuntimeError:
FAILED test/utils/test_undirected.py::test_to_undirected - RuntimeError:
```
### Versions
latest | closed | 2024-08-16T20:30:13Z | 2024-08-27T19:58:21Z | https://github.com/pyg-team/pytorch_geometric/issues/9600 | [
"bug"
] | puririshi98 | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | computer-vision | 1,542 | Working with High Resolution Images | Hi, could you give me some advice about choosing the image load size?
How do I know how much to reduce the size of the images I feed to my model?
I don't know how this choice will affect the model. Is there any mathematical rule for picking these values?
Here is the section that you have mentioned in tips.md:
Training/Testing with high res images
CycleGAN is quite memory-intensive as four networks (two generators and two discriminators) need to be loaded on one GPU, so a large image cannot be entirely loaded. In this case, we recommend training with cropped images. For example, to generate 1024px results, you can train with --preprocess scale_width_and_crop --load_size 1024 --crop_size 360, and test with --preprocess scale_width --load_size 1024. This way makes sure the training and test will be at the same scale. At test time, you can afford higher resolution because you don’t need to load all networks.
Thanks in advance! | open | 2023-02-10T22:26:21Z | 2023-02-14T21:59:08Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1542 | [] | AlicanAKCA | 1 |
bmoscon/cryptofeed | asyncio | 122 | module 'asyncio' has no attribute 'run' (in Python 3.6) | setup.py declares support for Python 3.6/3.7, but an exception is raised when running examples/demo_tcp.py with Python 3.6.5.
```
File "demo_tcp.py", line 54, in <module>
asyncio.run(main())
AttributeError: module 'asyncio' has no attribute 'run'
```
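Since `asyncio.run` only exists on Python 3.7+, a 3.6-compatible workaround is to drive the event loop by hand. A minimal sketch (`main` here is just a stand-in for the demo's coroutine):

```python
import asyncio

async def main():
    # Stand-in for the demo's coroutine (the real demo opens exchange feeds).
    return "done"

# asyncio.run(main()) needs Python 3.7+; on 3.6 the loop is driven manually.
# The classic 3.6 idiom is asyncio.get_event_loop(); a fresh loop is created
# here so the snippet also runs cleanly on newer interpreters.
loop = asyncio.new_event_loop()
try:
    result = loop.run_until_complete(main())
finally:
    loop.close()

print(result)  # -> done
```

Alternatively, setup.py's `python_requires` could simply be tightened to `>=3.7`.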
Actually, asyncio.run is a Python 3.7 addition | closed | 2019-07-25T05:11:05Z | 2019-09-02T19:50:59Z | https://github.com/bmoscon/cryptofeed/issues/122 | [
"good first issue"
] | malone6 | 2 |
dagster-io/dagster | data-science | 28,530 | [dagster-components] `AssetSpecModel` does not resolve dep asset keys | ### What's the issue?
When using `AssetSpecModel` in a component, dependencies with multi-part asset keys are not resolved into asset keys; they are kept as plain strings.
Given the `component.yaml`:
```yaml
type: dagster_components.dagster.PipesSubprocessScriptCollectionComponent
attributes:
scripts:
- path: "my_script.py"
assets:
- key: "kitchen_sink"
deps:
- "prefixed/upstream"
```
Dagster will throw:
> ERROR:dagster.code_server:dagster._core.errors.DagsterInvalidDefinitionError: "my/upstream" is not a valid name in Dagster. Names must be in regex ^[A-Za-z0-9_]+$.
### What did you expect to happen?
_No response_
### How to reproduce?
_No response_
### Dagster version
dagster, version 1.10.5
### Deployment type
None
### Deployment details
_No response_
### Additional information
_No response_
### Message from the maintainers
Impacted by this issue? Give it a 👍! We factor engagement into prioritization. | closed | 2025-03-16T08:48:43Z | 2025-03-18T15:01:13Z | https://github.com/dagster-io/dagster/issues/28530 | [
"type: bug",
"area: dagster-components"
] | stevenayers | 0 |
InstaPy/InstaPy | automation | 5,847 | InstaPy gets blocked instantly | Hi guys,
I was using InstaPy for about two days and then I got blocked by Instagram. Is there a way to avoid this? I searched for solutions but didn't find one. | closed | 2020-10-26T12:03:03Z | 2020-12-20T14:06:22Z | https://github.com/InstaPy/InstaPy/issues/5847 | [
"wontfix"
] | Atumos | 12 |
streamlit/streamlit | data-visualization | 10,383 | Make st.toast appear/bring it to the front (stack order) when used in st.dialog | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
Not sure whether to file this as a feature request or a bug, but when st.toast is used inside st.dialog, the toast is rendered behind the dialog.
### Reproducible Code Example
```Python
import streamlit as st
@st.dialog(title="Streamlit Toast Notification")
def toast_notification():
activate_toast = st.button(label="send toast")
if activate_toast:
st.toast("Hi, I am in the background!")
toast_notification()
```
### Steps To Reproduce
1. Create dialog
2. Click button to show toast
### Expected Behavior
st.toast should be stacked at the front of the dialog.
### Current Behavior
Stacks behind st.dialog.
### Is this a regression?
- [ ] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: 1.42.0
- Python version: 3.10
- Operating System: Windows
- Browser: Chrome
### Additional Information
_No response_ | open | 2025-02-12T20:19:16Z | 2025-02-13T12:10:54Z | https://github.com/streamlit/streamlit/issues/10383 | [
"type:enhancement",
"feature:st.toast",
"feature:st.dialog"
] | Socvest | 4 |
explosion/spaCy | machine-learning | 13,468 | ⚠ Aborting and saving the final best model. Encountered exception: RuntimeError('Invalid argument') RuntimeError: Invalid argument | I ran into a problem when training my Chinese information-extraction model on the GPU provided by Kaggle, using a config file generated with the config-generation tool on the spaCy website. Your help is greatly appreciated.
Some of my environment information is below; if you need anything else, please leave a message and I will do my best to provide it.
!nvidia-smi
```
NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================|
| 0 Tesla P100-PCIE-16GB Off | 00000000:00:04.0 Off | 0 |
| N/A 34C P0 26W / 250W | 0MiB / 16384MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|==================================================================|
| No running processes found |
+---------------------------------------------------------------------------------------+
```
```
!python -V
Python 3.10.13
!python -m spacy info
============================== Info about spaCy ==============================
spaCy version 3.7.4
Location /opt/conda/lib/python3.10/site-packages/spacy
Platform Linux-5.15.133+-x86_64-with-glibc2.31
Python version 3.10.13
Pipelines zh_core_web_lg (3.7.0), en_core_web_sm (3.7.1), en_core_web_lg (3.7.1)
```
There are too many Python packages to list them all; I can provide the full list if necessary.
The command that triggers the error:
```
!python -m spacy project run all
The error message in its entirety:
ℹ Running workflow 'all'
================================== convert ==================================
ℹ Skipping 'convert': nothing changed
=================================== train ===================================
Running command: /opt/conda/bin/python -m spacy train configs/config.cfg --output training/bid/ --paths.train corpus/train.spacy --paths.dev corpus/dev.spacy --gpu-id 0
ℹ Saving to output directory: training/bid
ℹ Using GPU: 0
=========================== Initializing pipeline ===========================
[2024-04-28 08:10:01,857] [INFO] Set up nlp object from config
[2024-04-28 08:10:01,902] [INFO] Pipeline: ['transformer', 'ner']
[2024-04-28 08:10:01,909] [INFO] Created vocabulary
[2024-04-28 08:10:01,910] [INFO] Finished initializing nlp object
[2024-04-28 08:10:19,274] [INFO] Initialized pipeline components: ['transformer', 'ner']
✔ Initialized pipeline
============================= Training pipeline =============================
ℹ Pipeline: ['transformer', 'ner']
ℹ Initial learn rate: 0.0
E # LOSS TRANS... LOSS NER ENTS_F ENTS_P ENTS_R SCORE
--- ------ ------------- -------- ------ ------ ------ ------
⚠ Aborting and saving the final best model. Encountered exception:
RuntimeError('Invalid argument')
Traceback (most recent call last):
File "/opt/conda/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.10/site-packages/spacy/__main__.py", line 4, in <module>
setup_cli()
File "/opt/conda/lib/python3.10/site-packages/spacy/cli/_util.py", line 87, in setup_cli
command(prog_name=COMMAND)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
return self.main(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 783, in main
return _main(
File "/opt/conda/lib/python3.10/site-packages/typer/core.py", line 225, in _main
rv = self.invoke(ctx)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/opt/conda/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/typer/main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "/opt/conda/lib/python3.10/site-packages/spacy/cli/train.py", line 54, in train_cli
train(config_path, output_path, use_gpu=use_gpu, overrides=overrides)
File "/opt/conda/lib/python3.10/site-packages/spacy/cli/train.py", line 84, in train
train_nlp(nlp, output_path, use_gpu=use_gpu, stdout=sys.stdout, stderr=sys.stderr)
File "/opt/conda/lib/python3.10/site-packages/spacy/training/loop.py", line 135, in train
raise e
File "/opt/conda/lib/python3.10/site-packages/spacy/training/loop.py", line 118, in train
for batch, info, is_best_checkpoint in training_step_iterator:
File "/opt/conda/lib/python3.10/site-packages/spacy/training/loop.py", line 220, in train_while_improving
nlp.update(
File "/opt/conda/lib/python3.10/site-packages/spacy/language.py", line 1193, in update
proc.update(examples, sgd=None, losses=losses, **component_cfg[name]) # type: ignore
File "/opt/conda/lib/python3.10/site-packages/spacy_transformers/pipeline_component.py", line 294, in update
trf_full, bp_trf_full = self.model.begin_update(docs)
File "/opt/conda/lib/python3.10/site-packages/thinc/model.py", line 328, in begin_update
return self._func(self, X, is_train=True)
File "/opt/conda/lib/python3.10/site-packages/spacy_transformers/layers/transformer_model.py", line 199, in forward
model_output, bp_tensors = transformer(wordpieces, is_train)
File "/opt/conda/lib/python3.10/site-packages/thinc/model.py", line 310, in __call__
return self._func(self, X, is_train=is_train)
File "/opt/conda/lib/python3.10/site-packages/thinc/layers/pytorchwrapper.py", line 225, in forward
Ytorch, torch_backprop = model.shims[0](Xtorch, is_train)
File "/opt/conda/lib/python3.10/site-packages/thinc/shims/pytorch.py", line 95, in __call__
return self.begin_update(inputs)
File "/opt/conda/lib/python3.10/site-packages/thinc/shims/pytorch.py", line 129, in begin_update
output = self._model(*inputs.args, **inputs.kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 1013, in forward
encoder_outputs = self.encoder(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 607, in forward
layer_outputs = layer_module(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 497, in forward
self_attention_outputs = self.attention(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 427, in forward
self_outputs = self.self(
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "/opt/conda/lib/python3.10/site-packages/transformers/models/bert/modeling_bert.py", line 325, in forward
attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))
RuntimeError: Invalid argument
``` | closed | 2024-04-28T12:23:09Z | 2024-05-15T10:56:51Z | https://github.com/explosion/spaCy/issues/13468 | [
"lang / zh",
"training",
"gpu",
"feat / ner",
"feat / transformer"
] | Lance-Owen | 1 |
aminalaee/sqladmin | fastapi | 495 | View all columns | ### Checklist
- [X] There are no similar issues or pull requests for this yet.
### Is your feature related to a problem? Please describe.
Is there a configuration to display all columns of a table? Currently, I am using the following piece of code:
```
class ModelAdmin(ModelView, model=Model):
column_list = [c_attr.key for c_attr in Model.__mapper__.column_attrs]
```
### Describe the solution you would like.
It would be nice to have a configuration option like `show_all_columns=True` (or something similar) to display all columns:
```
class ModelAdmin(ModelView, model=Model):
show_all_columns = True
```
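Until such a flag exists, the boilerplate can at least be factored into a small helper (a pure-Python sketch; the helper name is mine, and `__mapper__` is the mapper attribute SQLAlchemy attaches to declarative models, as used above):

```python
def all_column_keys(model):
    """Return the key of every mapped column on a SQLAlchemy model."""
    return [attr.key for attr in model.__mapper__.column_attrs]

# Hypothetical usage, reusing the Model/ModelView names from above:
# class ModelAdmin(ModelView, model=Model):
#     column_list = all_column_keys(Model)
```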
Thank you! | closed | 2023-05-14T13:54:41Z | 2023-06-06T09:04:42Z | https://github.com/aminalaee/sqladmin/issues/495 | [] | maurosaladino | 2 |
TencentARC/GFPGAN | pytorch | 46 | only paste back from already restored faces | Is it possible to do this without restoring the faces again, just with 2x ESRGAN?
like
"python inference_gfpgan.py --upscale 2 --model_path nomodel --test_path results/restored_faces --save_root results/restored_images --paste_back_only"
? | closed | 2021-08-18T16:41:59Z | 2021-08-25T10:20:22Z | https://github.com/TencentARC/GFPGAN/issues/46 | [] | NoUserNameForYou | 6 |
microsoft/qlib | deep-learning | 1,697 | Request: add a baostock daily data collector | Please add a collector for baostock daily (day-bar) data.
Thanks | open | 2023-11-22T08:31:12Z | 2023-11-22T08:31:12Z | https://github.com/microsoft/qlib/issues/1697 | [
"enhancement"
] | quant2008 | 0 |
ultrafunkamsterdam/undetected-chromedriver | automation | 1,376 | Makes a random command line window when compiled with pyinstaller | When compiled with pyinstaller, undetected-chromedriver makes a random command prompt window.
It looks like this: https://prnt.sc/mifqvFcQqGAW
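I haven't confirmed a fix, but a workaround often suggested for stray consoles in windowed PyInstaller builds (an assumption on my part, not a documented undetected-chromedriver feature) is to make Windows child processes start without their own console before the library spawns its chromedriver patcher subprocess:

```python
import subprocess
import sys

if sys.platform == "win32":
    # Speculative workaround: default every child process (including the
    # chromedriver patcher that undetected-chromedriver launches) to
    # CREATE_NO_WINDOW so no console pops up in a --noconsole build.
    class _NoConsolePopen(subprocess.Popen):
        def __init__(self, *args, **kwargs):
            kwargs.setdefault("creationflags", subprocess.CREATE_NO_WINDOW)
            super().__init__(*args, **kwargs)

    subprocess.Popen = _NoConsolePopen

# import undetected_chromedriver as uc  # import only *after* the patch
# driver = uc.Chrome()
```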
| open | 2023-07-01T02:37:53Z | 2023-07-08T17:59:49Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/1376 | [] | lukeprofits | 1 |
smarie/python-pytest-cases | pytest | 179 | Nested parametrize_with_cases does not collect test if fixture is used multiple times | Hello there!
First of all, thanks for this package, it is very awesome!
I was doing some matrix testing to check a user's permissions against other users' resources. To do that I had large case classes that boiled down to the minimal example below:
```
import pytest_cases as pytest
@pytest.fixture
def db_dep():
return None
class CaseX:
def case_one(self, db_dep):
return 1
def case_two(self, db_dep):
return 2
class CaseY:
@pytest.parametrize_with_cases("x", cases=CaseX)
def case_x_one(self,db_dep,x):
return x, 1
@pytest.parametrize_with_cases("x", cases=CaseX)
def case_x_two(self,db_dep,x):
return x, 1
@pytest.parametrize_with_cases("x,y", cases=CaseY)
def test_nested_parametrize(x, y):
pass
```
I know the example could be simplified, but in my use case CaseX's cases need db access to create a user; CaseY's cases then get this user and create a new one (so they need db access too). The test finally receives the matrix of created users.
Here when you run pytest, the following is collected:
```
Test session starts (platform: linux, Python 3.9.1, pytest 6.2.1, pytest-sugar 0.9.4)
rootdir: /home/kexo/Projects/trajaan/backend
plugins: mock-3.5.1, cov-2.10.1, xdist-2.2.0, forked-1.3.0, cases-3.1.1, sugar-0.9.4
collecting ...
<Module test.py>
<Function test_nested_parametrize[x_two-one]>
<Function test_nested_parametrize[x_two-two]>
```
I can manage to get the correct collection only if I remove the fixture dependency on ClassX:
```
import pytest_cases as pytest
@pytest.fixture
def db_dep():
return None
class CaseX:
def case_one(self):
return 1
def case_two(self):
return 2
class CaseY:
@pytest.parametrize_with_cases("x", cases=CaseX)
def case_x_one(self,db_dep,x):
return x, 1
@pytest.parametrize_with_cases("x", cases=CaseX)
def case_x_two(self,db_dep,x):
return x, 1
@pytest.parametrize_with_cases("x,y", cases=CaseY)
def test_nested_parametrize(x, y):
pass
```
```
Test session starts (platform: linux, Python 3.9.1, pytest 6.2.1, pytest-sugar 0.9.4)
rootdir: /home/kexo/Projects/trajaan/backend
plugins: mock-3.5.1, cov-2.10.1, xdist-2.2.0, forked-1.3.0, cases-3.1.1, sugar-0.9.4
collecting ...
<Module test.py>
<Function test_nested_parametrize[x_one-one]>
<Function test_nested_parametrize[x_one-two]>
<Function test_nested_parametrize[x_two-one]>
<Function test_nested_parametrize[x_two-two]>
Results (0.02s):
```
Is this expected? How can I work around this limitation? I tried defining another fixture like `db_dep` under a different name, but the limitation is still there:
```
import pytest_cases as pytest
@pytest.fixture
def db_dep():
return None
@pytest.fixture
def db_dep2():
return None
class CaseX:
def case_one(self, db_dep2):
return 1
def case_two(self, db_dep2):
return 2
class CaseY:
@pytest.parametrize_with_cases("x", cases=CaseX)
def case_x_one(self,db_dep,x):
return x, 1
@pytest.parametrize_with_cases("x", cases=CaseX)
def case_x_two(self,db_dep,x):
return x, 1
@pytest.parametrize_with_cases("x,y", cases=CaseY)
def test_nested_parametrize(x, y):
pass
```
```
Test session starts (platform: linux, Python 3.9.1, pytest 6.2.1, pytest-sugar 0.9.4)
rootdir: /home/kexo/Projects/trajaan/backend
plugins: mock-3.5.1, cov-2.10.1, xdist-2.2.0, forked-1.3.0, cases-3.1.1, sugar-0.9.4
collecting ...
<Module test.py>
<Function test_nested_parametrize[x_two-one]>
<Function test_nested_parametrize[x_two-two]>
Results (0.02s):
```
Thanks for having a look! | closed | 2021-01-23T12:38:39Z | 2021-01-25T16:14:29Z | https://github.com/smarie/python-pytest-cases/issues/179 | [] | reyreaud-l | 4 |
gee-community/geemap | jupyter | 527 | Add Planet global monthly/quarterly mosaic | Reference: https://developers.planet.com/quickstart/apis | closed | 2021-06-16T00:08:03Z | 2021-06-16T00:50:50Z | https://github.com/gee-community/geemap/issues/527 | [
"Feature Request"
] | giswqs | 2 |
sqlalchemy/alembic | sqlalchemy | 419 | AttributeError: 'Engine' object has no attribute 'in_transaction' | **Migrated issue, originally created by bretonium ([@bretonium](https://github.com/bretonium))**
Release 0.9.x broke our migrations; they now fail with this traceback:
```
breton@breton-pc ~/src/mediagoblin (master*) $ ./bin/gmg dbupdate
WARNING: audiolab is not installed so wav2png will not work
INFO [alembic.runtime.migration] Context impl SQLiteImpl.
INFO [alembic.runtime.migration] Will assume non-transactional DDL.
INFO [alembic.runtime.migration] Running upgrade -> 52bf0ccbedc1, initial revision
Traceback (most recent call last):
File "./bin/gmg", line 11, in <module>
load_entry_point('mediagoblin', 'console_scripts', 'gmg')()
File "/home/breton/src/mediagoblin/mediagoblin/gmg_commands/__init__.py", line 148, in main_cli
args.func(args)
File "/home/breton/src/mediagoblin/mediagoblin/gmg_commands/dbupdate.py", line 234, in dbupdate
run_dbupdate(app_config, global_config)
File "/home/breton/src/mediagoblin/mediagoblin/gmg_commands/dbupdate.py", line 165, in run_dbupdate
run_alembic_migrations(db, app_config, global_config)
File "/home/breton/src/mediagoblin/mediagoblin/gmg_commands/dbupdate.py", line 136, in run_alembic_migrations
return command.upgrade(cfg, 'heads')
File "/home/breton/src/mediagoblin/local/lib/python2.7/site-packages/alembic/command.py", line 254, in upgrade
script.run_env()
File "/home/breton/src/mediagoblin/local/lib/python2.7/site-packages/alembic/script/base.py", line 416, in run_env
util.load_python_file(self.dir, 'env.py')
File "/home/breton/src/mediagoblin/local/lib/python2.7/site-packages/alembic/util/pyfiles.py", line 93, in load_python_file
module = load_module_py(module_id, path)
File "/home/breton/src/mediagoblin/local/lib/python2.7/site-packages/alembic/util/compat.py", line 75, in load_module_py
mod = imp.load_source(module_id, path, fp)
File "/home/breton/src/mediagoblin/mediagoblin/db/migrations/env.py", line 63, in <module>
run_migrations_online()
File "/home/breton/src/mediagoblin/mediagoblin/db/migrations/env.py", line 58, in run_migrations_online
context.run_migrations()
File "<string>", line 8, in run_migrations
File "/home/breton/src/mediagoblin/local/lib/python2.7/site-packages/alembic/runtime/environment.py", line 817, in run_migrations
self.get_context().run_migrations(**kw)
File "/home/breton/src/mediagoblin/local/lib/python2.7/site-packages/alembic/runtime/migration.py", line 330, in run_migrations
self.connection.in_transaction():
AttributeError: 'Engine' object has no attribute 'in_transaction'
```
http://git.savannah.gnu.org/cgit/mediagoblin.git/tree/mediagoblin/db/migrations/env.py#n44 -- code that fails.
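I haven't verified this against the 0.9.x internals, but the traceback suggests env.py is handing `context.configure()` the `Engine` itself, while Alembic 0.9 now calls `connection.in_transaction()`, a method that exists on `Connection` but not on `Engine`. A sketch of the standard online pattern (`config`, `target_metadata`, `engine_from_config` and `pool` are the usual names already present in a generated env.py):

```python
def run_migrations_online():
    connectable = engine_from_config(
        config.get_section(config.config_ini_section),
        prefix="sqlalchemy.",
        poolclass=pool.NullPool,
    )
    # Hand Alembic a Connection, not the Engine: Engine has no
    # in_transaction() method, which is exactly where the traceback ends.
    with connectable.connect() as connection:
        context.configure(
            connection=connection,
            target_metadata=target_metadata,
        )
        with context.begin_transaction():
            context.run_migrations()
```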
| closed | 2017-03-03T21:13:12Z | 2017-03-04T22:15:19Z | https://github.com/sqlalchemy/alembic/issues/419 | [
"bug"
] | sqlalchemy-bot | 4 |
davidsandberg/facenet | computer-vision | 1,061 | A typo in train_softmax.py | It is on line 260:
`for key, value in stat.iteritems():`
Should it be `for key, value in stat.items():`? | closed | 2019-07-29T07:16:49Z | 2019-08-14T04:57:23Z | https://github.com/davidsandberg/facenet/issues/1061 | [] | XuJianxing | 2 |
alteryx/featuretools | scikit-learn | 2,458 | Add AgeToDesignation primitive | The following are the American Medical Association's age designations:
- Neonates or newborns (birth to 1 month)
- Infants (1 month to 1 year)
- Children (1 year through 12 years)
- Adolescents (13 years through 17 years. They may also be referred to as teenagers depending on the context.)
- Adults (18 years or older)
- Older adults (65 and older)* | open | 2023-01-20T17:03:05Z | 2023-06-26T19:16:19Z | https://github.com/alteryx/featuretools/issues/2458 | [] | gsheni | 0 |
pyeve/eve | flask | 578 | allow projections on embedded resources | Hi,
I have an embedded list of child objects referenced by ids in the parent object like so
```
parent: {
"children": [ "child1", "child2"]
}
```
The schema for my child object is like so
```
child: {
"a": "x",
"b": "y",
"c": "z"
}
```
I would like a way to retrieve a list of parent records, along with the embedded children, but where each child contains only fields `a` and `c`
Something like this
`GET /parents?max_results=10&embedded={"children":1}&projection={"children.a":1,"children.c":1}`
Please let me know if there is planned support for this, and/or whether it is possible at all. My child objects contain a lot of fields and I only need a very small subset of them when querying for parents, so this would greatly speed up response times and dramatically improve client-side performance.
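To make the requested semantics concrete, this is the transformation the server would apply to each parent document (a pure-Python sketch; the helper name is mine):

```python
def project_children(parent, fields=("a", "c")):
    """Strip each embedded child down to the requested subset of fields."""
    trimmed = dict(parent)  # shallow copy so the original document is untouched
    trimmed["children"] = [
        {key: child[key] for key in fields if key in child}
        for child in parent["children"]
    ]
    return trimmed

doc = {"children": [{"a": "x", "b": "y", "c": "z"}]}
print(project_children(doc))  # -> {'children': [{'a': 'x', 'c': 'z'}]}
```

Doing this client-side works today, but only server-side projection would actually shrink the payload, which is the point of the request.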
| closed | 2015-03-19T22:38:04Z | 2018-05-18T16:19:35Z | https://github.com/pyeve/eve/issues/578 | [
"feature request",
"stale"
] | doshprompt | 12 |
pydata/xarray | pandas | 9,854 | Add FAQ answer about API stability / backwards compatibility? | ### What is your issue?
We try pretty hard to maintain backwards compatibility in xarray, and have informative deprecation cycles before any breaking changes. But this feature of the library isn't super-well advertised in the docs. The only places I can find it mentioned are deep in the [contributing guide](https://docs.xarray.dev/en/stable/contributing.html#backwards-compatibility) and the [FAQ question](https://docs.xarray.dev/en/stable/getting-started-guide/faq.html#what-parts-of-xarray-are-considered-public-api) about what's _not_ public and stable API.
I want to add another FAQ question that makes an explicit promise, something like:
> ### How stable is Xarray's API?
>
> Xarray tries very hard to maintain backwards compatibility between released versions. Whilst we do occasionally make breaking changes in order to improve the library, we try to [signpost changes](https://docs.xarray.dev/en/stable/contributing.html#backwards-compatibility) with `DeprecationWarnings` for many (6+?) months in advance. (An exception is bugfixes - which we try to fix as soon as we notice them.) Our [test-driven development practices](https://docs.xarray.dev/en/stable/contributing.html#test-driven-development-code-writing) help to ensure any accidental regressions are caught. This philosophy applies to everything in the [public API](https://docs.xarray.dev/en/stable/getting-started-guide/faq.html#what-parts-of-xarray-are-considered-public-api).
That is my understanding of what we already do, but I think it's useful for it to be in writing.
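For illustration, the deprecation cycle described above usually looks something like this in code (a made-up keyword, not a real xarray signature):

```python
import warnings

def open_dataset(path, *, chunks=None, lock=None):
    """Hypothetical function showing how a keyword is signposted before
    removal: it keeps working, but emits a DeprecationWarning for
    several releases first."""
    if lock is not None:
        warnings.warn(
            "the `lock` keyword is deprecated and will be removed in a "
            "future version of xarray",
            DeprecationWarning,
            stacklevel=2,
        )
    return {"path": path, "chunks": chunks}
```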
cc @shoyer let me know if you think this is too strong / weak a promise to make explicitly | closed | 2024-12-04T17:56:32Z | 2025-01-30T17:34:37Z | https://github.com/pydata/xarray/issues/9854 | [
"topic-documentation"
] | TomNicholas | 0 |
sherlock-project/sherlock | python | 1,824 | Mine | Here is a great file viewing app for Android. https://play.google.com/store/apps/details?id=com.sharpened.androidfileviewer | closed | 2023-06-27T10:52:48Z | 2023-08-29T12:36:31Z | https://github.com/sherlock-project/sherlock/issues/1824 | [] | Kitchenboy77 | 0 |
uriyyo/fastapi-pagination | fastapi | 1,286 | get_body_field() got an unexpected keyword argument 'dependant' | Not certain what's happening, but upgrading to pytest 8.3.3 from pytest 8.2.2 leads to this error, perhaps because of sub dependencies? Feel free to close the issue if there's no good way to investigate this.
```
INFO sqlalchemy.engine.Engine BEGIN (implicit)
INFO sqlalchemy.engine.Engine PRAGMA main.table_info("execution")
INFO sqlalchemy.engine.Engine [raw sql] ()
INFO sqlalchemy.engine.Engine COMMIT
ImportError while loading conftest 'conftest.py'.
conftest.py:7: in <module>
from main import app
main.py:33: in <module>
app = get_application(settings)
main.py:23: in get_application
add_pagination(application)
lib/python3.11/site-packages/fastapi_pagination/api.py:390: in add_pagination
_add_pagination(parent)
lib/python3.11/site-packages/fastapi_pagination/api.py:380: in _add_pagination
_update_route(route)
lib/python3.11/site-packages/fastapi_pagination/api.py:364: in _update_route
route.body_field = get_body_field(dependant=route.dependant, name=route.unique_id)
E TypeError: get_body_field() got an unexpected keyword argument 'dependant'
``` | closed | 2024-09-16T18:44:22Z | 2024-09-17T14:28:39Z | https://github.com/uriyyo/fastapi-pagination/issues/1286 | [
"bug"
] | zromick | 1 |
hootnot/oanda-api-v20 | rest-api | 144 | factory InstrumentsCandlesFactory Invalid value specified for 'to'. Time is in the future | Hi Mr Hootnot.
Not sure if this is an issue with the factory or whether Oanda has changed their behaviour in the v20 practice environment.
Seems that you can no longer get the last daily candle. Once upon a time the last candle would be returned for the current day, and Oanda would include an attribute in the JSON that showed whether the candle was complete or not.
I think they have removed this behaviour and you can no longer get the open candle.
So assuming I call the below on the 21st June 2019 the following happens.
```python
INFO:oandapyV20.oandapyV20:performing request https://api-fxpractice.oanda.com/v3/instruments/EUR_USD/candles
DEBUG:urllib3.connectionpool:https://api-fxpractice.oanda.com:443 "GET /v3/instruments/EUR_USD/candles?granularity=D&includeFirst=True&from=2019-06-20T00%3A00%3A00Z&to=2019-06-21T08%3A47%3A51Z HTTP/1.1" 400 74
ERROR:oandapyV20.oandapyV20:request https://api-fxpractice.oanda.com/v3/instruments/EUR_USD/candles failed [400,{"errorMessage":"Invalid value specified for 'to'. Time is in the future"}]
```
```python
params = {
'granularity': 'D',
'from': '2017-01-01T00:00:00Z',
'count': 100
}
for r in InstrumentsCandlesFactory(instrument='EUR_USD', params=params):
res = api.request(r)
```
I can make it go away / workaround if I set the 'to' to today -1 to avoid the future call.
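Computing that date-safe 'to' value dynamically could look like this (a stdlib-only sketch; the hard-coded params below show the static equivalent):

```python
from datetime import datetime, timedelta, timezone

def safe_to_param(now=None):
    """Return an RFC3339 'to' timestamp clamped to the start of the
    previous UTC day, so the request never reaches into the future."""
    now = now or datetime.now(timezone.utc)
    prev_day = (now - timedelta(days=1)).replace(
        hour=0, minute=0, second=0, microsecond=0
    )
    return prev_day.strftime("%Y-%m-%dT%H:%M:%SZ")

params = {
    "granularity": "D",
    "from": "2017-01-01T00:00:00Z",
    "to": safe_to_param(),
    "count": 100,
}
```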
```python
params = {
'granularity': 'D',
'from': '2017-01-01T00:00:00Z',
'to': '2019-06-20T00:00:00Z',
'count': 100
}
``` | closed | 2019-06-21T08:53:22Z | 2021-05-21T12:31:46Z | https://github.com/hootnot/oanda-api-v20/issues/144 | [] | svenissimo | 10 |
ClimbsRocks/auto_ml | scikit-learn | 120 | run DataFrameVectorizer before parallelization | right now it's quite memory inefficient: we essentially end up holding the entire thing in memory twice.
and it's somewhat computationally expensive (not hugely so, but noticeable). and it's basically doing the same thing each time.
so rather than doing this 8 times in parallel (essentially holding 16x our data in memory), run it once, before we dispatch data.
this will also give us more incentive to parallelize DataFrameVectorizer, since it will now be a single-threaded blocking operation, rather than nested inside an already-parallelized operation.
The issues we run into are particularly around subpredictors. For each subpredictor, there has to be some logic for which columns to keep, and which to ignore. And this is going to be different for each subpredictor.
However, I think we can handle that even after vectorization.
this also means we'll have to refactor how we do feature selection.
effectively, we'll have to build out our own feature selection module that sits after DFVectorizer.
our custom feature selection module will have to:
1. ignore any columns in the vectorized data that we should not know about (which i think will just be all the subpredictor y val columns).
2. ignore any values that have not been chosen by feature selection for that particular subpredictor.
to accomplish #1, the feature selection module must have knowledge of the column_descriptions object, and the dfv.vocabulary_ dict. From there it's some pretty straightforward logic.
#2 is just re-implementing scikit-learn's feature selection logic.
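A rough sketch of what that mask-building logic could look like (column names and the vocabulary shape here are assumptions, not the actual auto_ml internals):

```python
def build_keep_mask(vocabulary, ignore_columns, selected_columns=None):
    """Build a boolean keep-mask over the vectorized matrix columns.

    vocabulary: dict of column name -> column index (dfv.vocabulary_-style).
    ignore_columns: columns the model must never see (e.g. subpredictor y vals).
    selected_columns: columns chosen by feature selection for this
        subpredictor; None means keep everything that is not ignored.
    """
    mask = [False] * len(vocabulary)
    for name, idx in vocabulary.items():
        if name in ignore_columns:
            continue
        if selected_columns is not None and name not in selected_columns:
            continue
        mask[idx] = True
    return mask
```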
This also has all kinds of effects on the rest of the pipeline.
for example, when we get feature names for analytics, we must get them from the new feature selection module.
we will also have to not restrict DataFrameVectorizer, like we currently are. instead, we'll leave the logic for removing vals in our new custom feature selection module.
this means we can remove the somewhat hacky duplicate code in _construct_pipeline, which will be much cleaner.
We may have to do something differently in our column naming around subpredictors. we might be ok, but it's worth looking into.
| closed | 2016-10-18T16:32:10Z | 2017-03-12T01:08:21Z | https://github.com/ClimbsRocks/auto_ml/issues/120 | [] | ClimbsRocks | 5 |
deepset-ai/haystack | machine-learning | 8,798 | Expand the functionality of the `DocumentCleaner` | **Is your feature request related to a problem? Please describe.**
We've found in practice that cleaning up files before being used in RAG pipelines does increase overall performance. For example, this Haystack [user](https://github.com/deepset-ai/haystack/issues/8761#issuecomment-2609529890) found the same.
We do have a `DocumentCleaner` to help with this process, but we found there are some options missing for the type of cleaning we would like to accomplish.
**Describe the solution you'd like**
The options I'd like to add to the `DocumentCleaner` are:
- an option that just runs `.strip()` on the content of every document. Often times we just want to remove the extra leading and trailing white space, but leave the white space within a chunk alone. For example, in mark down files the extra newlines can matter for formatting.
- also an option to provide a regex pattern to remove **and** a string to replace that regex match with. We currently have a few regex replaces in the `DocumentCleaner` and have the `remove_regex` parameter, but we don't have a way to customize what string should be used to replace the regex match. For example, one scenario that I'd like to do is replace all double newline characters `\n\n` with a single newline character `\n`.
**Describe alternatives you've considered**
We can create a custom component to perform these operations instead.
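As a sketch, the core logic of such a custom component might look like this (the function name and signature are illustrative, not the real `DocumentCleaner` API):

```python
import re

def clean_document_content(text, strip=True, regex_replacements=None):
    """Sketch of the two proposed options: an optional .strip() on the
    whole content, plus ordered (pattern, replacement) regex
    substitutions with a configurable replacement string."""
    if regex_replacements:
        for pattern, replacement in regex_replacements:
            text = re.sub(pattern, replacement, text)
    if strip:
        text = text.strip()
    return text
```

For example, passing `regex_replacements=[(r"\n\n+", "\n")]` collapses double newlines to a single newline while leaving in-chunk whitespace alone.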
| open | 2025-02-03T12:19:59Z | 2025-03-14T14:28:23Z | https://github.com/deepset-ai/haystack/issues/8798 | [
"type:feature",
"P2"
] | sjrl | 1 |
mljar/mercury | jupyter | 386 | call to websocket keeps pending in mercury development server | Hi,
I execute `mercury run` from a folder containing some notebook files. Mercury starts without any error. It opens the browser listing the notebooks from the folder. I select a notebook and it opens. So far so good.
But now the right side stays grayed out and shows 3 dots indicating that it is loading. With the network tab open I can see that the call to the web server backend keeps pending and never returns. The hanging call originates from Provider.tsx:205, making a request to ws://127.0.0.1:8000/ws/client/1/b67c9541-15....
Firewall is disabled.
Any ideas as to why this happens and how to fix or further debug it?
Thanks,
Robert
| closed | 2023-10-27T13:18:43Z | 2023-10-28T15:02:14Z | https://github.com/mljar/mercury/issues/386 | [] | robert-elles | 2 |
AirtestProject/Airtest | automation | 516 | Where is the multi-device testing entry in AirtestIDE? | @yimelia
I saw in the tutorial that AirtestIDE supports batch (multi-device) testing; here is a screenshot of the tutorial UI:

However, I cannot find the corresponding entry in my AirtestIDE. Screenshots of my AirtestIDE version and UI are below:


| open | 2019-09-04T03:57:22Z | 2019-09-04T08:36:40Z | https://github.com/AirtestProject/Airtest/issues/516 | [
"to be released"
] | ymdhtt | 1 |
pytest-dev/pytest-cov | pytest | 13 | Support --fail-under | coverage 3.6 introduced a [`--fail-under` parameter to exit non-zero if coverage is below a certain threshold](http://nedbatchelder.com/code/coverage/cmd.html#cmd-reporting). It'd be great if pytest-cov also supported this.
| closed | 2014-06-23T15:37:58Z | 2014-11-26T17:08:01Z | https://github.com/pytest-dev/pytest-cov/issues/13 | [
"help wanted"
] | rouge8 | 10 |
nvbn/thefuck | python | 690 | Using a vim command successfully and then typing fuck causes a display error | I am using SSH (tty) to connect to my PC.

| open | 2017-09-06T02:08:05Z | 2017-09-06T02:08:05Z | https://github.com/nvbn/thefuck/issues/690 | [] | btstw | 0 |
autokey/autokey | automation | 421 | Phrase with <CTRL> modifier does not work with i3wm | ## Classification:
Bug
## Version
#### AutoKey version:
GUI used: GTK
Installed via: AUR
#### Linux Distribution:
Manjaro + i3wm
## Steps to Reproduce (if applicable)
Install autokey and use i3wm. Set the phrase `<CTRL>+j` to send `<down>`.
## Expected Results
The application will receive `<down>` only
## Actual Results
The application will receive `<CTRL>+<down>`. This is not the case with Gnome. | open | 2020-05-24T20:17:42Z | 2020-05-31T19:40:28Z | https://github.com/autokey/autokey/issues/421 | [] | pietrodito | 3 |
MagicStack/asyncpg | asyncio | 617 | set_type_codec() appears to assume a particular set_type_codec for the "char" datatype |
* **asyncpg version**: 0.21.0
* **PostgreSQL version**: 11.8 fedora
* **Do you use a PostgreSQL SaaS? If so, which? Can you reproduce
the issue with a local PostgreSQL install?**: N/A
* **Python version**: 3.8.3
* **Platform**: Fedora 31
* **Do you use pgbouncer?**: no
* **Did you install asyncpg with pip?**: yes
* **If you built asyncpg locally, which version of Cython did you use?**: N/A
* **Can the issue be reproduced under both asyncio and
[uvloop](https://github.com/magicstack/uvloop)?**: N/A
It appears that the implementation for set_type_codec() relies upon the results of the query [TYPE_BY_NAME](https://github.com/MagicStack/asyncpg/blob/2bac166c1ba098b9ebdfca3dc5b8264ae850213c/asyncpg/introspection.py#L137) which itself is assumed to return a bytes value from the PostgreSQL "char" datatype.
I was previously unaware that PostgreSQL actually has two "char" variants, bpchar and char, and in the documentation at https://magicstack.github.io/asyncpg/current/usage.html#type-conversion this is talking about the "bpchar" datatype. That's fine. However, when trying to normalize asyncpg's behavior against that of the psycopg2 and pg8000 drivers, both of which will give you back strings for both of these types (we have determined this is also a bug in those drivers, as they fail to return arbitrary bytes for such a datatype; this was likely missed when they migrated to Python 3), I tried setting up a type_codec for "char" that would allow it to return strings:
    await conn.set_type_codec(
        "char",
        schema="pg_catalog",
        encoder=lambda value: value,
        decoder=lambda value: value,
        format="text",
    )
That works, but when you do, you can no longer use the ``set_type_codec`` method for anything else, because the behavior of the type is redefined outside of the assumptions made by [is_scalar_type](https://github.com/MagicStack/asyncpg/blob/2bac166c1ba098b9ebdfca3dc5b8264ae850213c/asyncpg/introspection.py#L154).
The example program below illustrates this failure when attempting to subsequently set up a codec for the JSONB datatype:
```
import asyncio
import json
import asyncpg
async def main(illustrate_bug):
conn = await asyncpg.connect(
user="scott", password="tiger", database="test"
)
if illustrate_bug:
await conn.set_type_codec(
"char",
schema="pg_catalog",
encoder=lambda value: value,
decoder=lambda value: value,
format="text",
)
await conn.set_type_codec(
"jsonb",
schema="pg_catalog",
encoder=lambda value: value,
decoder=json.loads,
format="text",
)
print("no bug")
asyncio.run(main(False))
print("bug")
asyncio.run(main(True))
```
output:
```
no bug
bug
Traceback (most recent call last):
File "test3.py", line 35, in <module>
asyncio.run(main(True))
File "/opt/python-3.8.3/lib/python3.8/asyncio/runners.py", line 43, in run
return loop.run_until_complete(main)
File "/opt/python-3.8.3/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "test3.py", line 21, in main
await conn.set_type_codec(
File "/home/classic/.venv3/lib/python3.8/site-packages/asyncpg/connection.py", line 991, in set_type_codec
raise ValueError(
ValueError: cannot use custom codec on non-scalar type pg_catalog.jsonb
```
Since the "char" datatype is kind of an obscure construct, it's likely reasonable that asyncpg disallow setting up a type codec for this particular type, or perhaps it could emit a warning, but at the moment there doesn't seem to be documentation suggesting there are limitations on what kinds of type codecs can be constructed.
none of this is blocking us, just something we came across and I hope it's helpful to the asyncpg project. cc @fantix
| closed | 2020-09-16T17:07:05Z | 2020-09-25T01:51:29Z | https://github.com/MagicStack/asyncpg/issues/617 | [] | zzzeek | 5 |
LibreTranslate/LibreTranslate | api | 476 | Erroneous file translation -- text translation box works. | 1. Go to the website: <https://libretranslate.com>
2. Paste the following in the "Translate from Spanish" text box:
> Recuerde que "con éxito completando" significa que ha leído el módulo antes del grupo, ha completado todos los Escribió ejercicios antes del grupo y luego compartió tus respuestas con el grupo de acuerdo con las instrucciones del facilitador.
3. Observe that the "Translate into English" text box now reads:
> Remember that "successfully completing" means you've read the module before the group, ***completed all of them*** before the group and then shared your responses with the group according to the facilitator's instructions.
4. Click "TRANSLATE FILES".
5. Upload the exact same Spanish text, in a UTF-8 (no bom) .txt file, and download the translated file.
6. Observe that the translated file reads:
> Remember that "successfully completing" means that you have read the module before the group, ***have completed all the years*** before the group and then shared your answers with the group according to the facilitator's instructions.
---
Not only is the file's translation _different_ but it's also significantly inaccurate for the ***emphasized*** portion.
Shouldn't a plain text file translation match that of the text box, in this case?
| open | 2023-08-03T22:23:00Z | 2023-11-09T03:11:11Z | https://github.com/LibreTranslate/LibreTranslate/issues/476 | [
"enhancement"
] | veganaize | 1 |
MycroftAI/mycroft-core | nlp | 2,238 | Installing Mycroft may uninstall WINE without notice | A [user on the forums reported](https://community.mycroft.ai/t/there-should-be-a-warning-that-mycroft-will-uninstall-wine-and-other-software/6995/2) that by installing Mycroft, their WINE installation and all programs using WINE were uninstalled without warning. Have requested further details about the users system.
This appears to have happened at least once before on Kubuntu 16.04
https://community.mycroft.ai/t/finally-got-it-working/5580
In that instance it seemed that the `portaudio19-dev` package required `libjack0` and thus uninstalled `lib-jack-2-0`.
#### Expected behaviour:
- [ ] Able to install Mycroft without removing other software from the system.
- [ ] If a package must be removed from the system this should provide a warning and require user confirmation with the option to abort the installation process and return the system to its previous state. | closed | 2019-07-29T02:27:39Z | 2021-08-04T21:22:45Z | https://github.com/MycroftAI/mycroft-core/issues/2238 | [
"Type: Bug - complex"
] | krisgesling | 9 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,623 | How to use test.py to test both directions for cyclegan? | I ran test.py but it is only giving results from A to B, but I want the results from B to A also. Putting `--direction BtoA` only switches the input but not the model. Thank you | closed | 2024-02-06T16:13:44Z | 2024-02-14T18:48:48Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1623 | [] | lamwilton | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 359 | Runtime error when using ArcFace without a miner | I tried to use ArcFace loss without a miner (empty dictionary) in the [TwoStreamMetricLoss.ipynb](https://github.com/KevinMusgrave/pytorch-metric-learning/blob/master/examples/notebooks/TwoStreamMetricLoss.ipynb) from the examples on Colab, but it fails with the following runtime error:
```
RuntimeError Traceback (most recent call last)
<ipython-input-25-565c85f60968> in <module>()
1 # In the embeddings plots, the small dots represent the 1st stream, and the larger dots represent the 2nd stream
----> 2 trainer.train(num_epochs=num_epochs)
7 frames
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/trainers/base_trainer.py in train(self, start_epoch, num_epochs)
85 pbar = tqdm.tqdm(range(self.iterations_per_epoch))
86 for self.iteration in pbar:
---> 87 self.forward_and_backward()
88 self.end_of_iteration_hook(self)
89 pbar.set_description("total_loss=%.5f" % self.losses["total_loss"])
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/trainers/base_trainer.py in forward_and_backward(self)
113 self.zero_grad()
114 self.update_loss_weights()
--> 115 self.calculate_loss(self.get_batch())
116 self.loss_tracker.update(self.loss_weights)
117 self.backward()
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/trainers/twostream_metric_loss.py in calculate_loss(self, curr_batch)
16 indices_tuple = self.maybe_mine_embeddings(embeddings, labels)
17 self.losses["metric_loss"] = self.maybe_get_metric_loss(
---> 18 embeddings, labels, indices_tuple
19 )
20
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/trainers/twostream_metric_loss.py in maybe_get_metric_loss(self, embeddings, labels, indices_tuple)
37 all_embeddings = torch.cat(embeddings, dim=0)
38 return self.loss_funcs["metric_loss"](
---> 39 all_embeddings, all_labels, indices_tuple
40 )
41 return 0
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
1049 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1050 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1051 return forward_call(*input, **kwargs)
1052 # Do not call functions when jit is used
1053 full_backward_hooks, non_full_backward_hooks = [], []
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/losses/base_metric_loss_function.py in forward(self, embeddings, labels, indices_tuple)
32 c_f.check_shapes(embeddings, labels)
33 labels = c_f.to_device(labels, embeddings)
---> 34 loss_dict = self.compute_loss(embeddings, labels, indices_tuple)
35 self.add_embedding_regularization_to_loss_dict(loss_dict, embeddings)
36 return self.reducer(loss_dict, embeddings, labels)
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/losses/large_margin_softmax_loss.py in compute_loss(self, embeddings, labels, indices_tuple)
102 dtype, device = embeddings.dtype, embeddings.device
103 self.cast_types(dtype, device)
--> 104 miner_weights = lmu.convert_to_weights(indices_tuple, labels, dtype=dtype)
105 mask = self.get_target_mask(embeddings, labels)
106 cosine = self.get_cosine(embeddings)
/usr/local/lib/python3.7/dist-packages/pytorch_metric_learning/utils/loss_and_miner_utils.py in convert_to_weights(indices_tuple, labels, dtype)
208 indices, counts = torch.unique(torch.cat(indices_tuple, dim=0), return_counts=True)
209 counts = c_f.to_dtype(counts, dtype=dtype) / torch.sum(counts)
--> 210 weights[indices] = counts / torch.max(counts)
211 return weights
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
```
It seems the labels are on the GPU, while indices_tuple is on the CPU. I'm not sure if it's a bug or I missed something.
Any help is appreciated. :) | closed | 2021-07-30T18:20:09Z | 2021-11-28T19:20:35Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/359 | [
"bug",
"fixed in dev branch"
] | gkouros | 2 |
automl/auto-sklearn | scikit-learn | 1,593 | Using recal with autosklearn 2 raised issue that `askl2_training_data.json` is not available | open | 2022-10-10T13:24:50Z | 2022-11-08T17:13:57Z | https://github.com/automl/auto-sklearn/issues/1593 | [] | eddiebergman | 1 | |
deepspeedai/DeepSpeed | pytorch | 6,687 | nv-nightly CI test failure | The Nightly CI for https://github.com/microsoft/DeepSpeed/actions/runs/11584608559 failed.
| closed | 2024-10-30T01:09:53Z | 2024-10-31T17:34:39Z | https://github.com/deepspeedai/DeepSpeed/issues/6687 | [
"ci-failure"
] | github-actions[bot] | 1 |
sinaptik-ai/pandas-ai | pandas | 1,310 | Is that a must to use the Docker component if I am using the agent function? | ### System Info
What's the need for Docker if I am not relying on the front end / don't need the front end?
### 🐛 Describe the bug
I am using my OpenAI model; is there a need to spin up Docker? | closed | 2024-08-05T06:46:00Z | 2024-11-11T16:04:26Z | https://github.com/sinaptik-ai/pandas-ai/issues/1310 | [] | rogerlpag | 5 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 667 | [FEATURE]: Better job blacklisting (Title, Location) | ### Feature summary
Improve title/location blacklisting with better string matching, or make it GPT-powered
### Feature description
The current blacklisting is direct matching with the configuration file. This introduces multiple false positives.
Examples:
"Brazil" is blacklisted, but "Rio de Janeiro, Brazil", is whitelisted as false positive
"Data Engineer" is blacklisted ,but "Data Engineer(Gen AI)" is whitelisted as false positive
Solution: introduce a better matching algorithm or make blacklisting GPT-powered
### Motivation
Better blacklisting makes the applied jobs more relevant
### Alternatives considered
Instead of direct string matching, split it first, then check.
Long-term solution: implement comprehensive string parsing that handles all non-text characters
Comprehensive (Not efficient) solution: Make a check with GPT
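As a concrete sketch of the split-then-check alternative, a normalized token-boundary match would already handle both examples above (illustrative code, not the current AIHawk implementation):

```python
import re

def normalize(text):
    """Lowercase and collapse all non-alphanumeric runs to single spaces."""
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def is_blacklisted(value, blacklist):
    """Match a blacklist entry only on whole-token boundaries, so
    'Brazil' hits 'Rio de Janeiro, Brazil' but not 'Brazilian'."""
    padded = f" {normalize(value)} "
    return any(f" {normalize(entry)} " in padded for entry in blacklist)
```

This also catches "Data Engineer(Gen AI)" for the blacklist entry "Data Engineer", since the parentheses normalize to token boundaries.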
### Additional context
_No response_ | closed | 2024-10-29T18:48:17Z | 2024-11-07T00:46:48Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/667 | [
"enhancement"
] | Jasar-k | 2 |
databricks/koalas | pandas | 1,456 | Add Spark JDBC read | For enterprise use, I'd like to poll the extension of read methods to JDBC, given that drivers are available in the Spark Context.
**Current Solutions**
```python
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
import databricks.koalas as ks
jdbc_options = dict()
jdbc_options["driver"] = "<my-driver>"
# JDBC
df = ks.DataFrame(
spark.read.format("jdbc").options(**jdbc_options).option("dbtable", "<sql>").load()
)
# Snowflake
sf_options = dict()
df = ks.DataFrame(
spark.read.format("net.snowflake.spark.snowflake")
.options(**sf_options)
.option("dbtable", "<sql>")
.load()
)
```
**New Solution**
```python
import databricks.koalas as ks
df = ks.read_jdbc(dbtable="<sql>", driver="<my-driver>", **options)
```
| closed | 2020-05-01T11:14:12Z | 2020-12-10T17:41:54Z | https://github.com/databricks/koalas/issues/1456 | [] | sebastianvermaas | 6 |
aiortc/aioquic | asyncio | 501 | [SECURITY] Accepting and storing an unlimited number of CRYPTO frames within a single connection | - Aioquic may receive an unbounded number of `CRYPTO` frames within a single connection, rapidly depleting memory until the process is forcefully killed by the operating system, enabling a denial-of-service attack.
- In line 1613 of `quic/connection.py`, the server only checks `offset + length < 2^62 - 1` when processing `CRYPTO` frames, and then stores their contents in `QuicConnection._crypto_streams[Epoch.ONE_RTT]`, resulting in unbounded memory consumption.
- To validate the effect, I simulated an attacker sending `CRYPTO` frames with the Offset set to 0x1000, but with a length of only 0x200 to prevent some memory merging operations. As shown in the graph, within 90s of the attack starting, aioquic consumed 100GB of memory and would soon be killed by the operating system.

| closed | 2024-05-27T17:32:02Z | 2024-06-18T14:53:41Z | https://github.com/aiortc/aioquic/issues/501 | [] | k4ra5u | 1 |
python-gino/gino | sqlalchemy | 457 | attribute 'db' in aiohttp.py |
### Description
Hi! I am using the gino aiohttp extension, and in my application there is already a `db` attribute. Can we pass the attribute name as an argument?
### For example:
```
def init_app(self, app, config = None, attr = 'db'):
app[attr] = self
```
https://github.com/fantix/gino/blob/d50fb882fbf3adf38b04f70da0f7d71574768081/gino/ext/aiohttp.py#L118
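To illustrate the proposed change in use (aiohttp's `Application` behaves like a mapping; a plain dict stands in for it here):

```python
def init_app(ext, app, config=None, attr="db"):
    """Sketch of the proposed signature: the extension is stored under a
    configurable key instead of the hard-coded 'db'."""
    app[attr] = ext

app = {"db": "my existing database helper"}  # the pre-existing attribute
init_app("gino extension", app, attr="gino")
```

The existing `db` entry stays untouched and the Gino instance lives under `app["gino"]`.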
What do you think?
Thanks! | closed | 2019-03-12T16:06:14Z | 2019-03-21T02:01:47Z | https://github.com/python-gino/gino/issues/457 | [
"help wanted",
"feature request"
] | EvgenyUsov | 2 |
huggingface/diffusers | pytorch | 10,416 | Euler flow matching scheduler is missing documentation for parameters | 
I think there are some undocumented parameters here. | closed | 2024-12-31T13:15:35Z | 2025-01-09T18:54:41Z | https://github.com/huggingface/diffusers/issues/10416 | [] | bghira | 4 |
onnx/onnx | pytorch | 5,957 | The error in the Installation of onnx |
# Ask a Question
When I install cnocr with pip, there is an error during the installation of onnx.
## Response code:
[error in install.txt](https://github.com/onnx/onnx/files/14387640/error.in.install.txt)
### Further information
operating system : Windows 10 ltsc 1809;
cmake version: 3.29.0-rc1;
pip version: 24.0;
python version: 3.12.
### Notes
I have tried installing on another computer with Windows 11, with the same cmake, pip, and python versions, and got the same error.
| open | 2024-02-23T16:30:20Z | 2024-03-12T05:18:10Z | https://github.com/onnx/onnx/issues/5957 | [
"question"
] | wk19941015 | 1 |
plotly/dash-table | dash | 249 | Select all rows | I don't think it's possible to select all rows in the table / filtered view.
Is this something that can be added?
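Until it is built in, one workaround pattern is a callback that mirrors the filtered view's row indices into `selected_rows` (this assumes the table's `derived_virtual_indices` prop; the pure helper below is what such a callback would return):

```python
def select_all_rows(derived_virtual_indices):
    """Return the `selected_rows` value that selects every row currently
    visible in the (possibly filtered) view; None means the table has not
    reported a view yet, so nothing is selected."""
    return list(derived_virtual_indices) if derived_virtual_indices else []
```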
Thanks! And thanks for all your work on the project - excited to see how it develops | open | 2018-11-20T19:31:24Z | 2022-07-11T13:15:06Z | https://github.com/plotly/dash-table/issues/249 | [
"dash-type-enhancement",
"size: 2"
] | pmajmudar | 13 |
flasgger/flasgger | api | 621 | Flasgger still showing outdated docs | I once created the documentation using flasgger, but after I modified the endpoints' swag_from documentation, the newly run flask app still shows outdated documentation. | open | 2024-06-22T23:05:52Z | 2024-06-22T23:05:52Z | https://github.com/flasgger/flasgger/issues/621 | [] | NaviteLogger | 0 |
pydata/xarray | numpy | 9,455 | `DataTree.to_zarr()` is very slow writing to high latency store | ### What is your issue?
Repost of https://github.com/xarray-contrib/datatree/issues/277, with some updates.
## Test case
Write a tree containing 13 nodes and negligible data to S3/GCS with fsspec:
```python
import numpy as np
import xarray as xr
from xarray.core.datatree import DataTree  # bare DataTree is used below
ds = xr.Dataset(
data_vars={
"a": xr.DataArray(np.ones((2, 2)), coords={"x": [1, 2], "y": [1, 2]}),
"b": xr.DataArray(np.ones((2, 2)), coords={"x": [1, 2], "y": [1, 2]}),
"c": xr.DataArray(np.ones((2, 2)), coords={"x": [1, 2], "y": [1, 2]}),
}
)
dt = xr.core.datatree.DataTree()
for first_level in [1, 2, 3]:
dt[f"{first_level}"] = DataTree(ds)
for second_level in [1, 2, 3]:
dt[f"{first_level}/{second_level}"] = DataTree(ds)
%time dt.to_zarr("test.zarr", mode="w")
bucket = "s3|gs://your-bucket/path"
%time dt.to_zarr(f"{bucket}/test.zarr", mode="w")
```
Gives:
```
CPU times: user 287 ms, sys: 43.9 ms, total: 331 ms
Wall time: 331 ms
CPU times: user 3.22 s, sys: 219 ms, total: 3.44 s
Wall time: 1min 4s
```
This is a bit better than in the original issue due to improvements elsewhere in the stack, but still really slow for heavily nested but otherwise small datasets.
## Potential Improvements
#9014 did make some decent improvements to read speed. When reading the dataset written above I get:
```python
%timeit xr.backends.api.open_datatree(f"{bucket}/test.zarr", engine="zarr")
%timeit datatree.open_datatree(f"{bucket}/test.zarr", engine="zarr")
```
```
882 ms ± 47.3 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
3.47 s ± 86.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
```
We'll need similar optimizations on the write side. The fundamental issue is that `DataTree.to_zarr` relies on serial `Dataset.to_zarr` calls for each node:
https://github.com/pydata/xarray/blob/12c690f4bd72141798d7c3991a95abf88b5d76d3/xarray/core/datatree_io.py#L153-L171
This results in many `fsspec` calls to list dirs, check file existence, and put small metadata and attribute files in the bucket. Here's `snakeviz` on the example:

(The 8s block on the right is metadata consolidation)
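For illustration, issuing the per-node writes concurrently could hide most of that per-request latency. The sketch below uses a stub writer instead of real zarr calls, since the actual fix would presumably live near the per-node loop in `datatree_io`:

```python
from concurrent.futures import ThreadPoolExecutor

def write_tree_concurrently(node_paths, write_one, max_workers=8):
    """Dispatch one high-latency write per tree node in parallel.

    node_paths: group paths like ["/", "/1", "/1/2", ...].
    write_one: callable doing the per-node store I/O (stubbed here).
    """
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = [pool.submit(write_one, p) for p in node_paths]
        return [f.result() for f in futures]
```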
## Workaround
If your data is small enough to dump locally, this works great:
```python
from tempfile import TemporaryDirectory

def to_zarr(dt, path):
    # `fs` is an fsspec filesystem instance for the target bucket,
    # e.g. fs = fsspec.filesystem("s3")
    with TemporaryDirectory() as tmp_path:
        dt.to_zarr(tmp_path)
        fs.put(tmp_path, path, recursive=True)
```
Takes about 1s.
| open | 2024-09-08T14:30:28Z | 2025-03-20T06:10:03Z | https://github.com/pydata/xarray/issues/9455 | [
"topic-backends",
"topic-performance",
"topic-zarr",
"topic-DataTree"
] | slevang | 3 |
Skyvern-AI/skyvern | api | 1,585 | Any effective way to change the value of a variable in the public docker image? | Is there any effective way to change the value of a variable in the public Docker image when running Docker locally? Thanks. | open | 2025-01-16T22:49:35Z | 2025-01-25T10:27:01Z | https://github.com/Skyvern-AI/skyvern/issues/1585 | [] | universe2jouney | 3 |
deezer/spleeter | deep-learning | 647 | [Discussion] use gpu in docker failed, can I use --gpus param? | this command works well
docker run --rm -v $(pwd):/output deezer/spleeter-gpu:3.8-2stems separate -o /output /output/3t.mp3
but these commands failed
docker run --rm -v $(pwd):/output --gpus all deezer/spleeter-gpu:3.8-2stems separate -o /output /output/3t.mp3
docker run --rm -v $(pwd):/output --runtime=nvidia deezer/spleeter-gpu:3.8-2stems separate -o /output /output/3t.mp3
the error is:
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1365, in _do_call
return fn(*args)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1349, in _run_fn
return self._call_tf_sessionrun(options, feed_dict, fetch_list,
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1441, in _call_tf_sessionrun
return tf_session.TF_SessionRun_wrapper(self._session, options, feed_dict,
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[51,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_transpose_4/conv2d_transpose}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[strided_slice_23/_907]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[51,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node conv2d_transpose_4/conv2d_transpose}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/usr/local/lib/python3.8/site-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
File "/usr/local/lib/python3.8/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/usr/local/lib/python3.8/site-packages/spleeter/__main__.py", line 128, in separate
separator.separate_to_file(
File "/usr/local/lib/python3.8/site-packages/spleeter/separator.py", line 382, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/usr/local/lib/python3.8/site-packages/spleeter/separator.py", line 323, in separate
return self._separate_tensorflow(waveform, audio_descriptor)
File "/usr/local/lib/python3.8/site-packages/spleeter/separator.py", line 305, in _separate_tensorflow
prediction = next(prediction_generator)
File "/usr/local/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 631, in predict
preds_evaluated = mon_sess.run(predictions)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/training/monitored_session.py", line 774, in run
return self._sess.run(
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/training/monitored_session.py", line 1279, in run
return self._sess.run(
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/training/monitored_session.py", line 1384, in run
raise six.reraise(*original_exc_info)
File "/usr/local/lib/python3.8/site-packages/six.py", line 703, in reraise
raise value
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/training/monitored_session.py", line 1369, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/training/monitored_session.py", line 1437, in run
outputs = _WrappedSession.run(
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/training/monitored_session.py", line 1200, in run
return self._sess.run(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 957, in run
result = self._run(None, fetches, feed_dict, options_ptr,
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1180, in _run
results = self._do_run(handle, final_targets, final_fetches,
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1358, in _do_run
return self._do_call(_run_fn, feeds, fetches, targets, options,
File "/usr/local/lib/python3.8/site-packages/tensorflow/python/client/session.py", line 1384, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.ResourceExhaustedError: 2 root error(s) found.
(0) Resource exhausted: OOM when allocating tensor with shape[51,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node conv2d_transpose_4/conv2d_transpose (defined at /lib/python3.8/site-packages/spleeter/model/functions/unet.py:164) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[[strided_slice_23/_907]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
(1) Resource exhausted: OOM when allocating tensor with shape[51,16,256,512] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node conv2d_transpose_4/conv2d_transpose (defined at /lib/python3.8/site-packages/spleeter/model/functions/unet.py:164) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
0 successful operations.
0 derived errors ignored.
Errors may have originated from an input operation.
Input Source operations connected to node conv2d_transpose_4/conv2d_transpose:
concatenate_3/concat (defined at /lib/python3.8/site-packages/spleeter/model/functions/unet.py:162)
Input Source operations connected to node conv2d_transpose_4/conv2d_transpose:
concatenate_3/concat (defined at /lib/python3.8/site-packages/spleeter/model/functions/unet.py:162)
Original stack trace for 'conv2d_transpose_4/conv2d_transpose':
File "/bin/spleeter", line 8, in <module>
sys.exit(entrypoint())
File "/lib/python3.8/site-packages/spleeter/__main__.py", line 256, in entrypoint
spleeter()
File "/lib/python3.8/site-packages/typer/main.py", line 214, in __call__
return get_command(self)(*args, **kwargs)
File "/lib/python3.8/site-packages/click/core.py", line 829, in __call__
return self.main(*args, **kwargs)
File "/lib/python3.8/site-packages/click/core.py", line 782, in main
rv = self.invoke(ctx)
File "/lib/python3.8/site-packages/click/core.py", line 1259, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "/lib/python3.8/site-packages/click/core.py", line 1066, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/lib/python3.8/site-packages/click/core.py", line 610, in invoke
return callback(*args, **kwargs)
File "/lib/python3.8/site-packages/typer/main.py", line 497, in wrapper
return callback(**use_params) # type: ignore
File "/lib/python3.8/site-packages/spleeter/__main__.py", line 128, in separate
separator.separate_to_file(
File "/lib/python3.8/site-packages/spleeter/separator.py", line 382, in separate_to_file
sources = self.separate(waveform, audio_descriptor)
File "/lib/python3.8/site-packages/spleeter/separator.py", line 323, in separate
return self._separate_tensorflow(waveform, audio_descriptor)
File "/lib/python3.8/site-packages/spleeter/separator.py", line 305, in _separate_tensorflow
prediction = next(prediction_generator)
File "/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 612, in predict
estimator_spec = self._call_model_fn(features, None, ModeKeys.PREDICT,
File "/lib/python3.8/site-packages/tensorflow_estimator/python/estimator/estimator.py", line 1163, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 568, in model_fn
return builder.build_predict_model()
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 516, in build_predict_model
tf.estimator.ModeKeys.PREDICT, predictions=self.outputs
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 318, in outputs
self._build_outputs()
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 499, in _build_outputs
self._outputs = self._build_output_waveform(self.masked_stfts)
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 342, in masked_stfts
self._build_masked_stfts()
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 465, in _build_masked_stfts
for instrument, mask in self.masks.items():
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 336, in masks
self._build_masks()
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 432, in _build_masks
output_dict = self.model_outputs
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 312, in model_outputs
self._build_model_outputs()
File "/lib/python3.8/site-packages/spleeter/model/__init__.py", line 211, in _build_model_outputs
self._model_outputs = apply_model(
File "/lib/python3.8/site-packages/spleeter/model/functions/unet.py", line 197, in unet
return apply(apply_unet, input_tensor, instruments, params)
File "/lib/python3.8/site-packages/spleeter/model/functions/__init__.py", line 44, in apply
output_dict[out_name] = function(
File "/lib/python3.8/site-packages/spleeter/model/functions/unet.py", line 164, in apply_unet
up5 = conv2d_transpose_factory(conv_n_filters[0], (5, 5))((merge4))
File "/lib/python3.8/site-packages/tensorflow/python/keras/engine/base_layer_v1.py", line 776, in __call__
outputs = call_fn(cast_inputs, *args, **kwargs)
File "/lib/python3.8/site-packages/tensorflow/python/keras/layers/convolutional.py", line 1291, in call
outputs = backend.conv2d_transpose(
File "/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/lib/python3.8/site-packages/tensorflow/python/keras/backend.py", line 5177, in conv2d_transpose
x = nn.conv2d_transpose(x, kernel, output_shape, strides,
File "/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/lib/python3.8/site-packages/tensorflow/python/ops/nn_ops.py", line 2482, in conv2d_transpose
return conv2d_transpose_v2(
File "/lib/python3.8/site-packages/tensorflow/python/util/dispatch.py", line 201, in wrapper
return target(*args, **kwargs)
File "/lib/python3.8/site-packages/tensorflow/python/ops/nn_ops.py", line 2560, in conv2d_transpose_v2
return gen_nn_ops.conv2d_backprop_input(
File "/lib/python3.8/site-packages/tensorflow/python/ops/gen_nn_ops.py", line 1293, in conv2d_backprop_input
_, _, _op, _outputs = _op_def_library._apply_op_helper(
File "/lib/python3.8/site-packages/tensorflow/python/framework/op_def_library.py", line 742, in _apply_op_helper
op = g._create_op_internal(op_type_name, inputs, dtypes=None,
File "/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 3477, in _create_op_internal
ret = Operation(
File "/lib/python3.8/site-packages/tensorflow/python/framework/ops.py", line 1949, in __init__
self._traceback = tf_stack.extract_stack()
My system:
OS: Ubuntu 18.04.5 LTS
CUDA: NVIDIA-SMI 460.80, Driver Version 460.80, CUDA Version 11.2
| closed | 2021-08-08T08:05:13Z | 2021-08-29T02:03:50Z | https://github.com/deezer/spleeter/issues/647 | [
"question"
] | m986883511 | 3 |
deezer/spleeter | tensorflow | 259 | [Discussion] someone is using spleeter for commercial use. | 
Someone is using this model for commercial use. Is it ok?
Here is the link
https://dango.ai/
| closed | 2020-02-05T06:19:47Z | 2020-02-08T13:56:09Z | https://github.com/deezer/spleeter/issues/259 | [
"question"
] | blue-sky-2020 | 1 |
graphql-python/graphene-django | django | 594 | Building mutations from scratch - django form or DRF better? | I am starting a project totally from scratch. In terms of simplicity, which will be easier?
| closed | 2019-03-12T00:28:31Z | 2019-07-01T17:20:29Z | https://github.com/graphql-python/graphene-django/issues/594 | [
"question",
"wontfix",
"Docs enhancement"
] | gotexis | 5 |
matplotlib/matplotlib | matplotlib | 29,259 | [Bug]: No module named pyplot | ### Bug summary
No module named pyplot
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import numpy as np
from scipy.integrate import solve_ivp

# Parameters
miumax = 0.65
ks = 12.8
a = 1.12
b = 1
Sm = 98.3
Pm = 65.2
Yxs = 0.067
m = 0.230
alfa = 7.3
beta = 0  # 0.15
si = 54.45
xi = 0.05
pi = 0
vi = 1.5
Vf = 4
Flux = 0.4
s0 = 180
tmax = 47  # hours
smin = 20

# Initial variables
var_ini = [xi, si, pi, vi]  # x, s, p, v

# Feed-flow profile
F_perfil = []

# Define the model functions
def batch(t, var):
    x, s, p, v = var
    mui = miumax * (s / (s + ks)) * (1 - (s / Sm)**a) * (1 - (p / Pm)**b)
    muis = (1 / Yxs) * mui + m
    miup = alfa * mui + beta
    dxdt = mui * x
    dsdt = - muis * x
    dpdt = miup * x
    dvdt = 0
    return [dxdt, dsdt, dpdt, dvdt]

def batchali(t, var, F):
    x, s, p, v = var
    mui = miumax * (s / (s + ks)) * (1 - (s / Sm)**a) * (1 - (p / Pm)**b)
    muis = (1 / Yxs) * mui + m
    miup = alfa * mui + beta
    dxdt = x * (mui - (F / v))
    dsdt = (F / v) * (s0 - s) - muis * x
    dpdt = miup * x - (F / v) * p
    dvdt = F
    return [dxdt, dsdt, dpdt, dvdt]

# Dynamic function with feed-flow control
def switch(t, var):
    x, s, p, v = var
    # Simulated loop to control the feed and stop the system
    while v <= Vf:
        if s <= smin:  # if s >= si, switch the feed off
            F = Flux
            F_perfil.append((t, F))  # record the flow
            return batchali(t, var, F)
        else:  # intermediate case
            F = 0
            F_perfil.append((t, F))  # record the flow
            return batch(t, var)
    if s >= si:  # if s <= smin, switch the feed on
        F = 0
        F_perfil.append((t, F))  # record the flow
        return batch(t, var)
    else:  # intermediate case
        F = Flux
        F_perfil.append((t, F))  # record the flow
        return batchali(t, var, F)

# Solve the system
t_span = (0, tmax)
t_eval = np.linspace(0, tmax, 500)
sol = solve_ivp(switch, t_span, var_ini, t_eval=t_eval, method='RK45')

# Convert the flow profile into arrays for plotting
F_perfil = np.array(F_perfil)
F_time = F_perfil[:, 0]
F_values = F_perfil[:, 1]

# Plot the results
labels = ['x (Biomasa)', 's (Sustrato)', 'p (Producto)', 'v (Volumen)']
plt.figure(figsize=(10, 6))
for i in range(3):
    plt.plot(sol.t, sol.y[i], label=labels[i])
plt.xlabel('Tiempo (h)')
plt.ylabel('Concentraciones y Volumen')
plt.title('Simulación dinámica del sistema biológico')
plt.legend()
plt.grid()
plt.show()

# Plot the feed-flow profile
plt.figure(figsize=(10, 4))
plt.plot(F_time, F_values, color='purple', label='Flujo (F)')
plt.xlabel('Tiempo (h)')
plt.ylabel('Flujo (L/h)')
plt.title('Perfil del flujo de alimentación')
plt.legend()
plt.grid()
plt.show()

plt.plot(sol.t, sol.y[3], label='v')
plt.xlabel('Tiempo')
plt.ylabel('volumen')
plt.title('volumen vs t')
plt.legend()
plt.grid()
plt.show()
```
### Actual outcome
The code does not run; it fails with an import error.
### Expected outcome
It should run without import errors.
### Additional information
_No response_
### Operating system
Windows 11
### Matplotlib Version
3.9.3
### Matplotlib Backend
module://backend_interagg
### Python version
Python 3.11.0
### Jupyter version
Not using this environment (Pycharm is used instead)
### Installation
pip | closed | 2024-12-08T21:01:23Z | 2024-12-09T16:15:33Z | https://github.com/matplotlib/matplotlib/issues/29259 | [
"Community support"
] | abelardogit | 3 |
rthalley/dnspython | asyncio | 738 | wrong answer returned | ```
import dns.rdatatype
import dns.resolver

resolver = dns.resolver.Resolver()
dnsreq = resolver.resolve(host, rdtype=dns.rdatatype.A, search=True)
sockset = set()
addrinfos = dnsreq.response.answer
for item in addrinfos:
    for j in item:
        print("==>", j, host)
        ip = j.address
```
When I use multiple threads to process the hosts, the result intermittently raises `'CNAME' object has no attribute 'address'`.
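For context on the fix: when the queried name is an alias, the DNS answer section contains a CNAME rrset followed by the A rrset, and only A/AAAA rdatas carry an `.address` attribute, so iterating `response.answer` blindly will eventually hit a CNAME. The usual remedies are to iterate the resolver answer itself (`for rdata in dnsreq:` yields only rdatas of the requested rdtype) or to filter by record type. A minimal stdlib sketch of the filtering idea (the `A`/`CNAME` classes below are stand-ins for dnspython's rdata types):

```python
from dataclasses import dataclass

@dataclass
class CNAME:          # alias record: has no .address attribute
    target: str

@dataclass
class A:              # address record
    address: str

def collect_addresses(answer_sections):
    """Read .address only from records that actually have one."""
    ips = []
    for rrset in answer_sections:
        for rdata in rrset:
            if isinstance(rdata, A):   # skip CNAME and friends
                ips.append(rdata.address)
    return ips

answer = [[CNAME("real.example.com.")],
          [A("93.184.216.34"), A("93.184.216.35")]]
print(collect_addresses(answer))  # → ['93.184.216.34', '93.184.216.35']
```

In dnspython itself, the equivalent check is comparing `rdata.rdtype` against `dns.rdatatype.A` before touching `rdata.address`.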
| closed | 2021-12-16T07:50:00Z | 2021-12-16T13:28:38Z | https://github.com/rthalley/dnspython/issues/738 | [] | promlife | 1 |
explosion/spaCy | data-science | 13,154 | MemoryError: Unable to allocate 29.7 GiB for an array with shape (86399, 4, 4, 2880, 2) and data type float32 | ### Discussed in https://github.com/explosion/spaCy/discussions/13153
<div type='discussions-op-text'>
<sup>Originally posted by **nunu346** November 27, 2023</sup>
    import xarray as xr

    netcdf_file_in = r'C:\Users\Mg\Desktop\ops_exis-l1b-sfxr_g16_d20210601_v0-0-0.nc'
    csv_file_out = r'C:\Users\Mg\Desktop\ops_exis-l1b-sfxr_g16_d20210601_v0-0-0.csv'

    ds = xr.open_dataset(netcdf_file_in)
    df = ds.to_dataframe()
    df.to_csv(csv_file_out)
    ds.close()

    print(f"Conversion from NetCDF to CSV complete. CSV file saved at: {csv_file_out}")
The error:
---------------------------------------------------------------------------
MemoryError Traceback (most recent call last)
Cell In[6], line 13
10 ds = xr.open_dataset(netcdf_file_in)
12 # Convert the dataset to a pandas DataFrame
---> 13 df = ds.to_dataframe()
15 # Save the DataFrame to a CSV file
16 df.to_csv(csv_file_out)
File ~\anaconda3\Lib\site-packages\xarray\core\dataset.py:6289, in Dataset.to_dataframe(self, dim_order)
6261 """Convert this dataset into a pandas.DataFrame.
6262
6263 Non-index variables in this dataset form the columns of the
(...)
6284
6285 """
6287 ordered_dims = self._normalize_dim_order(dim_order=dim_order)
-> 6289 return self._to_dataframe(ordered_dims=ordered_dims)
File ~\anaconda3\Lib\site-packages\xarray\core\dataset.py:6253, in Dataset._to_dataframe(self, ordered_dims)
6251 def _to_dataframe(self, ordered_dims: Mapping[Any, int]):
6252 columns = [k for k in self.variables if k not in self.dims]
-> 6253 data = [
6254 self._variables[k].set_dims(ordered_dims).values.reshape(-1)
6255 for k in columns
6256 ]
6257 index = self.coords.to_index([*ordered_dims])
6258 return pd.DataFrame(dict(zip(columns, data)), index=index)
File ~\anaconda3\Lib\site-packages\xarray\core\dataset.py:6254, in <listcomp>(.0)
6251 def _to_dataframe(self, ordered_dims: Mapping[Any, int]):
6252 columns = [k for k in self.variables if k not in self.dims]
6253 data = [
-> 6254 self._variables[k].set_dims(ordered_dims).values.reshape(-1)
6255 for k in columns
6256 ]
6257 index = self.coords.to_index([*ordered_dims])
6258 return pd.DataFrame(dict(zip(columns, data)), index=index)
MemoryError: Unable to allocate 29.7 GiB for an array with shape (86399, 4, 4, 2880, 2) and data type float32
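The requested size follows directly from `to_dataframe` materializing the dense cross-product of every dimension as float32 columns; a quick check of the arithmetic in the message:

```python
# Reproduce the "29.7 GiB" figure from the error message.
shape = (86399, 4, 4, 2880, 2)

n_elements = 1
for dim in shape:
    n_elements *= dim

n_bytes = n_elements * 4   # float32 = 4 bytes per element
gib = n_bytes / 2**30      # bytes -> GiB
print(round(gib, 1))       # → 29.7
```

Selecting only the variables or index ranges you need (for example with `ds[["some_var"]]` or `ds.isel(...)`) before calling `to_dataframe` keeps that product, and the allocation, small.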
</div> | closed | 2023-11-27T08:12:07Z | 2023-12-28T00:02:16Z | https://github.com/explosion/spaCy/issues/13154 | [] | nunu346 | 2 |
Anjok07/ultimatevocalremovergui | pytorch | 1,331 | UVR5 | Last Error Received:
Process: VR Architecture
If this error persists, please contact the developers with the error details.
Raw Error Details:
MemoryError: "Unable to allocate 1.64 GiB for an array with shape (219842879,) and data type float64"
Traceback Error: "
File "UVR.py", line 6638, in process_start
File "separate.py", line 1066, in seperate
File "separate.py", line 1205, in spec_to_wav
File "lib_v5\spec_utils.py", line 332, in cmb_spectrogram_to_wave
File "lib_v5\spec_utils.py", line 289, in spectrogram_to_wave
File "librosa\util\decorators.py", line 88, in inner_f
File "librosa\core\spectrum.py", line 431, in istft
"
Error Time Stamp [2024-05-11 10:55:05]
Full Application Settings:
vr_model: UVR-De-Echo-Aggressive
aggression_setting: 8
window_size: 1024
mdx_segment_size: 256
batch_size: Default
crop_size: 256
is_tta: False
is_output_image: False
is_post_process: False
is_high_end_process: False
post_process_threshold: 0.2
vr_voc_inst_secondary_model: No Model Selected
vr_other_secondary_model: No Model Selected
vr_bass_secondary_model: No Model Selected
vr_drums_secondary_model: No Model Selected
vr_is_secondary_model_activate: False
vr_voc_inst_secondary_model_scale: 0.9
vr_other_secondary_model_scale: 0.7
vr_bass_secondary_model_scale: 0.5
vr_drums_secondary_model_scale: 0.5
demucs_model: v4 | htdemucs
segment: Default
overlap: 0.25
overlap_mdx: Default
overlap_mdx23: 8
shifts: 2
chunks_demucs: Auto
margin_demucs: 44100
is_chunk_demucs: False
is_chunk_mdxnet: False
is_primary_stem_only_Demucs: False
is_secondary_stem_only_Demucs: False
is_split_mode: True
is_demucs_combine_stems: True
is_mdx23_combine_stems: True
demucs_voc_inst_secondary_model: No Model Selected
demucs_other_secondary_model: No Model Selected
demucs_bass_secondary_model: No Model Selected
demucs_drums_secondary_model: No Model Selected
demucs_is_secondary_model_activate: False
demucs_voc_inst_secondary_model_scale: 0.9
demucs_other_secondary_model_scale: 0.7
demucs_bass_secondary_model_scale: 0.5
demucs_drums_secondary_model_scale: 0.5
demucs_pre_proc_model: No Model Selected
is_demucs_pre_proc_model_activate: False
is_demucs_pre_proc_model_inst_mix: False
mdx_net_model: Reverb HQ
chunks: Auto
margin: 44100
compensate: Auto
denoise_option: None
is_match_frequency_pitch: True
phase_option: Automatic
phase_shifts: None
is_save_align: False
is_match_silence: True
is_spec_match: False
is_mdx_c_seg_def: False
is_invert_spec: False
is_deverb_vocals: False
deverb_vocal_opt: Main Vocals Only
voc_split_save_opt: Lead Only
is_mixer_mode: False
mdx_batch_size: 7
mdx_voc_inst_secondary_model: No Model Selected
mdx_other_secondary_model: No Model Selected
mdx_bass_secondary_model: No Model Selected
mdx_drums_secondary_model: No Model Selected
mdx_is_secondary_model_activate: False
mdx_voc_inst_secondary_model_scale: 0.9
mdx_other_secondary_model_scale: 0.7
mdx_bass_secondary_model_scale: 0.5
mdx_drums_secondary_model_scale: 0.5
is_save_all_outputs_ensemble: True
is_append_ensemble_name: False
chosen_audio_tool: Manual Ensemble
choose_algorithm: Min Spec
time_stretch_rate: 2.0
pitch_rate: 2.0
is_time_correction: True
is_gpu_conversion: True
is_primary_stem_only: True
is_secondary_stem_only: False
is_testing_audio: False
is_auto_update_model_params: True
is_add_model_name: False
is_accept_any_input: False
is_task_complete: False
is_normalization: False
is_use_opencl: False
is_wav_ensemble: False
is_create_model_folder: False
mp3_bit_set: 320k
semitone_shift: 0
save_format: WAV
wav_type_set: PCM_16
device_set: NVIDIA GeForce RTX 4070 Laptop GPU:0
help_hints_var: True
set_vocal_splitter: No Model Selected
is_set_vocal_splitter: False
is_save_inst_set_vocal_splitter: False
model_sample_mode: False
model_sample_mode_duration: 30
demucs_stems: All Stems
mdx_stems: All Stems | open | 2024-05-11T10:06:24Z | 2024-05-11T10:06:24Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1331 | [] | banduharisch | 0 |
JaidedAI/EasyOCR | deep-learning | 481 | BadZipFile error | 
Hi guys, thanks for your great work!
Recently, I have been using your program to make a toolkit with a snap-and-output GUI interface, but please see the attached image. I can run the whole program at home, but I can't do it in my office. I don't know what happened; could you please help me out?
Thanks in advance. | closed | 2021-07-05T04:31:25Z | 2021-10-06T09:21:17Z | https://github.com/JaidedAI/EasyOCR/issues/481 | [] | Miaoqi-2010 | 3 |
TencentARC/GFPGAN | deep-learning | 176 | Some colors in black and white photo | A minor detail: in some black-and-white photos, colors appear that are not in the photo; it seems the model "suggests" what colors the image should have. The improvement in the V1.3 model is also remarkable, although the 1.1 model has behaved very generously with this image. I must also add that some faces have been improved but with Asian features (like the women and children).
Thanks for your project, I love it!
img1- Original
img2 -V1 Model (more natural and accurate face, but colorized(in this face))
img3-V1.3 added colors in BW pics

| open | 2022-03-13T08:45:33Z | 2022-03-14T23:23:38Z | https://github.com/TencentARC/GFPGAN/issues/176 | [] | GOZARCK | 2 |
Guovin/iptv-api | api | 972 | [Bug]:Docker运行问题 | ### Don't skip these steps | 不要跳过这些步骤
- [x] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field | 我明白,如果我“故意”删除或跳过任何强制性的\*字段,我将被**限制**
- [x] I am sure that this is a running error exception problem and will not submit any problems unrelated to this project | 我确定这是运行报错异常问题,不会提交任何与本项目无关的问题
- [x] I have searched and double-checked that there are no similar issues that have been created | 我已经通过搜索并仔细检查过没有存在已经创建的类似问题
### Occurrence environment | 触发环境
- [ ] Workflow | 工作流
- [ ] GUI | 软件
- [x] Docker
- [ ] Command line | 命令行
### Bug description | 具体描述
1. Runtime environment: ESXi
(1) XPEnology (unofficial Synology): DSM 7.2.2-72806 Update 2
(2) Container Manager version: 24.0.2-1535
(3) Installed docker image version: 1.6.2
(4) PUID:0 and PGID:0 were set when creating the container
2. Problem description:
(1) After recently updating to version 1.6.2, the m3u file can no longer be updated
(2) What I tried: removed the container and re-pulled the image, deleted the original iptv-api folder, created a new IPTV folder as the mapped directory, and recreated the config and output directories inside it
3. Issues encountered:
(1) Opening a terminal inside the container reports an error: unable to connect, no teletype (TTY) found
(2) After running, the mapped config and output directories are empty; no files are generated
(3) Opening the m3u source pages directly in a browser works fine (global proxy enabled), and my own network tests found no problems
4. From testing: after the container finishes running, result.m3u can be downloaded via IP + port, so the m3u file is actually generated, but no files are visible in the mapped directories
5. The version in use before upgrading to 1.6.2 was 1.6.0
Finally, thank you for your hard work! Best wishes!
### Error log | 报错日志
[2025-03-17 日志及截图文件等.zip](https://github.com/user-attachments/files/19282298/2025-03-17.zip) | closed | 2025-03-17T08:43:38Z | 2025-03-18T01:58:22Z | https://github.com/Guovin/iptv-api/issues/972 | [
"invalid",
"wontfix"
] | WuLongMiTaoLaiYiDa | 3 |
gunthercox/ChatterBot | machine-learning | 2,382 | Error in installing Chatterbot. | Collecting chatterbot
Using cached ChatterBot-1.0.5-py2.py3-none-any.whl.metadata (8.1 kB)
Collecting mathparse<0.2,>=0.1 (from chatterbot)
Using cached mathparse-0.1.2-py3-none-any.whl.metadata (776 bytes)
Requirement already satisfied: nltk<4.0,>=3.2 in c:\users\--\appdata\local\programs\python\python312\lib\site-packages (from chatterbot) (3.8.1)
Collecting pint>=0.8.1 (from chatterbot)
Using cached Pint-0.24.3-py3-none-any.whl.metadata (8.5 kB)
Collecting pymongo<4.0,>=3.3 (from chatterbot)
Using cached pymongo-3.13.0.tar.gz (804 kB)
Preparing metadata (setup.py) ... done
Collecting python-dateutil<2.8,>=2.7 (from chatterbot)
Using cached python_dateutil-2.7.5-py2.py3-none-any.whl.metadata (7.5 kB)
Collecting pyyaml<5.2,>=5.1 (from chatterbot)
Using cached PyYAML-5.1.2.tar.gz (265 kB)
Preparing metadata (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [37 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\--\AppData\Local\Temp\pip-install-5dv6vnfj\pyyaml_2ef76d984def475d9c0a5afc98636544\setup.py", line 291, in <module>
setup(
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\core.py", line 183, in setup
return run_commands(dist)
^^^^^^^^^^^^^^^^^^
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\core.py", line 199, in run_commands
dist.run_commands()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\dist.py", line 954, in run_commands
self.run_command(cmd)
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\dist.py", line 999, in run_command
super().run_command(command)
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\dist.py", line 973, in run_command
cmd_obj.run()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\command\egg_info.py", line 312, in run
self.find_sources()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\command\egg_info.py", line 320, in find_sources
mm.run()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\command\egg_info.py", line 541, in run
self.add_defaults()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\command\egg_info.py", line 579, in add_defaults
sdist.add_defaults(self)
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\command\sdist.py", line 109, in add_defaults
super().add_defaults()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 238, in add_defaults
self._add_defaults_ext()
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\command\sdist.py", line 323, in _add_defaults_ext
self.filelist.extend(build_ext.get_source_files())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\--\AppData\Local\Temp\pip-install-5dv6vnfj\pyyaml_2ef76d984def475d9c0a5afc98636544\setup.py", line 199, in get_source_files
self.cython_sources(ext.sources, ext)
^^^^^^^^^^^^^^^^^^^
File "C:\Users\--\AppData\Local\Programs\Python\Python312\Lib\site-packages\setuptools\_distutils\cmd.py", line 107, in __getattr__
raise AttributeError(attr)
AttributeError: cython_sources
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
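For context: this `AttributeError: cython_sources` is the known incompatibility between PyYAML 5.1.x's `setup.py` (which ChatterBot 1.0.5 pins via `pyyaml<5.2,>=5.1`) and Cython 3 under newer setuptools. A commonly suggested workaround (a hedged suggestion, not an official fix) is to constrain the build-time Cython so the source build of PyYAML 5.1.2 succeeds, using pip's real `PIP_CONSTRAINT` mechanism with a constraints file such as this illustrative one:

```text
# constraint.txt (hypothetical filename) - keep the PyYAML 5.1.x source build working
cython<3.0
```

Then install with the constraint applied, e.g. `PIP_CONSTRAINT=constraint.txt pip install chatterbot` (on Windows, set the environment variable first and then run pip).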
hint: See above for details. | closed | 2024-11-07T03:52:04Z | 2025-02-09T17:24:35Z | https://github.com/gunthercox/ChatterBot/issues/2382 | [] | Prajapati-Shubham | 1 |